sequence | labels
---|---
[
"Recent work has shown that visual context improves cross-lingual sense disambiguation for nouns.",
"We extend this line of work to the more challenging task of cross-lingual verb sense disambiguation, introducing the MultiSense dataset of 9,504 images annotated with English, German, and Spanish verbs.",
"Each image in MultiSense is annotated with an English verb and its translation in German or Spanish.",
"We show that cross-lingual verb sense disambiguation models benefit from visual context, compared to unimodal baselines.",
"We also show that the verb sense predicted by our best disambiguation model can improve the results of a text-only machine translation system when used for a multimodal translation task.",
"Resolving lexical ambiguity remains one of the most challenging problems in natural language processing.",
"It is often studied as a word sense disambiguation (WSD) problem, which is the task of assigning the correct sense to a word in a given context (Kilgarrif, 1998).",
"Word sense disambiguation is typically tackled using only textual context ; however, in a multimodal setting, visual context is also available and can be used for disambiguation.",
"Most prior work on visual word sense disambiguation has targeted noun senses (Barnard and Johnson, 2005; Loeff et al., 2006; Saenko and Darrell, 2008), but the task has recently been extended to verb senses (Gella et al., 2016, 2019).",
"Resolving sense ambiguity is particularly crucial for translation tasks, as words can have more than one translation, and these translations often correspond to word senses (Carpuat and Wu, 2007; Nav-igli, 2009).",
"As an example consider the verb ride , which can translate into German as fahren (ride a bike) or reiten (ride a horse).",
"Recent work on multimodal machine translation has partly addressed Source: Three guys riding on an elephant.",
"lexical ambiguity by using visual information, but it still remains unresolved especially for the part-of-speech categories such as verbs (Specia et al., 2016; Shah et al., 2016; Hitschler et al., 2016; Lala and Specia, 2018).",
"Prior work on cross-lingual WSD has been limited in scale and has only employed textual context (Lefever and Hoste, 2013), even though the task should benefit from visual context, just like monolingual WSD.",
"Visual information has been shown to be useful to map words across languages for bilingual lexicon induction.",
"For this, images are used as a pivot between languages or visual information is combined with cross-lingual vector spaces to learn word translations across languages (Bergsma and Van Durme, 2011; Kiela et al., 2015; Vulic et al., 2016).",
"However, as with other grounding or word similarity tasks, bilingual lexicon induction has so far mainly targeted nouns and these approaches was shown to perform poorly for other word categories such as verbs.",
"Recent work by Gella et al. (2017) and Kadar et al. (2018) has shown using image as pivot between languages can lead to better multilingual multimodal representations and can have successful applications in crosslingual retrieval and multilingual image retrieval.",
"In this paper, we introduce the MultiSense dataset of 9,504 images annotated with English verbs and their translations in German and Spanish.",
"For each image in MultiSense, the English verb is translation-ambiguous, i.e., it has more than one possible translation in German or Spanish.",
"We propose a series of disambiguation models that, given an image and an English verb, select the correct translation of the verb.",
"We apply our models on MultiSense and find that multimodal models that fuse textual context with visual features outperform unimodal models, confirming our hypothesis that cross-lingual WSD benefits from visual context.",
"Cross-lingual WSD also has a clear application in machine translation.",
"Determining the correct sense of a verb is important for high quality translation output, and sometimes text-only translation systems fail when the correct translation would be obvious from visual information (see Figure 1).",
"To show that cross-lingual visual sense disambiguation can improve the performance of translation systems, we annotate a part of our MultiSense dataset with English image descriptions and their German translations.",
"There are two existing multimodal translation evaluation sets with ambiguous words: the Ambiguous COCO dataset (Elliott et al., 2017) contains sentences that are possibly ambiguous, and the Multimodal Lexical Translation dataset is restricted to predicting single words instead of full sentences (Lala and Specia, 2018).",
"This type of resource is important for multimodal translation because it is known that humans use visual context to resolve ambiguities for nouns and gender-neutral words (Frank et al., 2018).",
"MultiSense contains sentences that are known to have ambiguities, and it allows for sentence-level and verb prediction evaluation.",
"Here, we use the verbs predicted by our visual sense disambiguation model to constrain the output of a neural translation system and demonstrate a clear improvement in Meteor, BLEU, and verb accuracy over a text-only baseline.",
"Images Paired with Verb Translations The MultiSense dataset pairs sense-ambiguous English verbs with images as visual context and contextually appropriate German and Spanish translations.",
"Table 1 shows examples of images taken from MultiSense with their Spanish and German translations.",
"To compile the dataset, we first chose a set of English verbs which had multiple translations into German and Spanish in Wiktionary, an online dictionary.",
"Then we retrieved 150 candidate images from Google Images using queries that included the target English verb.",
"We constructed the verb phrases by extracting the 100 most frequent phrases for each verb from the English Google syntactic n-grams dataset (Lin et al., 2012), which we then manually filtered to remove redundancies, resulting in 10 phrases per verb.",
"Examples of verb phrases for blow include blowing hair , blowing a balloon , and blowing up a bomb .",
"We filtered the candidate images using crowdworkers on Amazon Mechanical Turk, who were asked to remove images that were irrelevant to the verb phrase query.",
"Overall pairwise agreement for this image filtering task was 0.763.",
"Finally, we employed native German and Spanish speakers to translate the verbs into their language, given the additional visual context.",
"This resulted in a dataset of 9,504 images, covering 55 English verbs with 154 and 136 unique translations in German and Spanish, respectively.",
"We divided the dataset into 75% training, 10% validation and 15% test splits.",
"Sentence-level Translations We also annotated a subset of MultiSense with sentence-level translations for English and German.",
"This subset contains 995 imageEnglish descriptionGerman translation tuples that can be used to evaluate the verb sense disambiguation capabilities of multimodal translation models.",
"We collected the data in four-steps: (1) crowdsource English descriptions of the images using the gold-standard MultiSense verb as a prompt; (2) manually post-edit the English descriptions to ensure they contain the correct verb; (3) crowdsource German translations, given the English descriptions, the German gold-standard MultiSense verb, and the image; (4) manually post-edit the German translations to ensure they contain the correct verb.",
"Figure 1 shows an example of an image paired with its English description and German translation.",
"We propose three models for cross-lingual verb sense disambiguation, based on the visual input, the textual input, or using both inputs.",
"Each model is trained to minimize the negative log probability of predicting the correct verb translation.",
"Visual features have been shown to be useful for learning semantic representations of words (Lazari-dou et al., 2015), bilingual lexicon learning (Kiela et al., 2015), and visual sense disambiguation (Gella et al., 2016), amongst others.",
"We propose a model that learns to predict the verb translation using only visual input.",
"Given an image I , we extract a fixed feature vector from a Convolutional Neural Network, and project it into a hidden layer h v with the learned matrix W i R h 512 (Eqn. 1).",
"The hidden layer is projected into the output vocabulary of v verbs using the learned matrix W o R h v , and normalized into a probability distribution using a softmax transformation (Eqn. 2).",
"Each image in MultiSense is associated with the query phrase that was used to retrieve it.",
"Given a query phrase with N words, we embed each word as a d -dimensional dense vector, and represent the phrase as the average of its embeddings E. We then project the query representation into a hidden layer with the learned matrix W q R h d (Eqn. 3).",
"The hidden layer is projected into an output layer and normalized to a probability distribution, in the same manner as the unimodal visual model.",
"We also propose a multimodal model that integrates the visual and textual features to predict the correct verb sense.",
"In our multimodal model, we concatenate the inputs together before projecting Chance Majority Text Image MM German 0.7 2.8 49.1 52.1 55.6 Spanish 0.7 4.0 52.7 50.3 56.0 Table 2: Cross-lingual verb sense disambiguation accuracy of our unimodal models and the multimodal model.",
"them into a hidden layer with a learned matrix W h R h ( 512 + h ) (Eqn. 4).",
"We follow the same steps as the unimodal models to project the multimodal hidden layers into the output label space.",
"Our experiments are designed to determine whether the integration of textual and visual features yields better cross-lingual verb sense disambiguation than unimodal models.",
"We embed the textual queries using pre-trained d = 300 dimension word2vec embeddings (Mikolov et al., 2013).",
"We represent images in the visual model using the features extracted from the 512D pool5 layer of a pre-trained ResNet-34 CNN (He et al., 2016).",
"All our models have a h = 128 dimension hidden layer.",
"The German models have an output vocabulary of v = 154 verbs, and the Spanish models have a vocabulary of v = 136 verbs.",
"All of our models are trained using SGD with mini-batches of 16 samples and a learning rate of 0.0001.",
"We evaluate the performance of our models by measuring the accuracy of the predicted verb against the gold standard.",
"We also compare against chance and majority label baselines.",
"Our preliminary experiments show that with better visual representation we achieve better acccuracy scores similar to others who observed better visual representation contributes to better downstream tasks such as image description (Fang et al., 2015), multimodal machine translation (Specia et al., 2016) and representation learning (Kadar et al., 2018).",
"We present the results in Table 2.",
"The chance and majority label baselines perform very poorly.",
"The unimodal textual model performs better than the Source A large herd of sheep is blocking the road.",
"unimodal visual model for German verb sense disambiguation, but we find the opposite for Spanish unimodal verb sense disambiguation.",
"However, the early fusion multimodal model outperforms the best unimodal model for both German and Spanish.",
"This confirms that cross-lingual verb sense disambiguation benefits from multimodal supervision compared to unimodal supervision.",
"We analyzed the outputs of our models in order to understand where multimodal features helped in identifying the correct verb translation and the cases where they failed.",
"In Figure 2, we show an example that illustrates how varying the input (tex-tual, visual, or multimodal) affects the accuracy of the verb prediction.",
"We show the top verb predicted by our models for both German and Spanish.",
"The top predicted verb using text-only visual features is incorrect.",
"The unimodal visual features model predicts the correct Spanish verb but the incorrect Meteor BLEU VAcc Baseline NMT 38.6 17.8 22.9 + Predicted Verb 40.0 18.5 49.5 + Oracle Verb 40.4 19.1 77.7 Caglayan et al. 46.1 25.8 29.3 Helcl & Libovicky 42.5 22.3 25.1 Table 4: Translation results: Meteor and BLEU are standard text-similarity metrics; verb accuracy (VAcc) counts how often the model proposal contains the gold standard German verb.",
"German verb.",
"However, when visual information is added to textual features, models in both the languages predict the correct label.",
"We also evaluate our verb sense disambiguation model in the challenging downstream task of multimodal machine translation (Specia et al., 2016).",
"We conduct this evaluation on the sentence-level translation subset of MultiSense.",
"We evaluate model performance using BLEU (Papineni et al., 2002) and Meteor scores (Denkowski and Lavie, 2014) between the MultiSense reference description and the translation model output.",
"We also evaluate the verb prediction accuracy of the output against the gold standard verb annotation.",
"Our baseline is an attention-based neural machine translation model (Hieber et al., 2017) trained on the 29,000 English-German sentences in Multi30k (Elliott et al., 2016).",
"We preprocessed the text with punctuation normalization, tokenization, and lowercasing.",
"We then learned a joint byte-pair-encoded vocabulary with 10,000 merge operations to reduce sparsity (Sennrich et al., 2016).",
"Our approach uses the German verb predicted by the unimodal visual model (Section 3.1) to constrain the output of the translation decoder (Post and Vilar, 2018).",
"This means that our approach does not directly use visual features, instead it uses the output of the visual verb sense disambiguation model to guide the translation process.",
"We compare our approach against two state-of-the-art multimodal translation systems: Caglayan et al. (2017) modulate the target language word embeddings by an element-wise multiplication with a learned transformation of the visual data; Helcl and Libovicky (2017) use a double attention model that learns to selectively attend to a combination of the source language and the visual data.",
"Table 4 shows the results of the translation experiment.",
"Overall, the Meteor scores are much lower than on the Multi30k test sets, where the state-of-the-art single model scores 51.6 Meteor points compared to 46.1 Meteor we obtained.",
"This gap is most likely due evaluating the models on an out-of-domain dataset with out-of-vocabulary tokens.",
"Using the predicted verb as a decoding constraint outperforms the text-only translation baseline by 1.4 Meteor points.",
"In addition, the translation output of our model contains the correct German verb 27% more often than the text-only baseline model.",
"These results show that a multimodal verb sense disambiguation model can improve translation quality in a multimodal setting.",
"We also calculated the upper bound of our approach by using the gold standard German verb as the lexical constraint.",
"In this oracle experiment we observed a further 0.4 Meteor point improvement over our best model, and a further 27% improvement in verb accuracy.",
"This shows that: (1) there are further improvements to be gained from improving the verb disambiguation model, and (2) the OOV rate in German means that we cannot achieve perfect verb accuracy.",
"We introduced the MultiSense dataset of 9,504 images annotated with an English verb and its translation in Spanish and German.",
"We proposed a range of cross-lingual visual sense disambiguation models and showed that multimodal models that fuse textual and visual features outperform unimodal models.",
"We also collected a set of image descriptions and their translations, and showed that the output of our cross-lingual WSD system boosts the performance of a text-only translation system on this data.",
"MultiSense is publicly available at https: //github.com/spandanagella/multisense Acknowledgements DE was supported by an Amazon Research Award.",
"This work was supported by the donation of a Titan Xp GPU by the NVIDIA Corporation."
] | [
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"result",
"other",
"abstain"
] |
[
"(cid:45)(cid:76)(cid:81)(cid:74)(cid:91)(cid:88)(cid:68)(cid:81) (cid:60)(cid:68)(cid:81)(cid:74)(cid:15) (cid:45)(cid:76)(cid:68)(cid:81)(cid:93)(cid:75)(cid:88)(cid:82) (cid:55)(cid:82)(cid:81)(cid:74)(cid:15) (cid:54)(cid:76) (cid:47)(cid:76) (cid:15) (cid:54)(cid:75)(cid:72)(cid:81)(cid:74) (cid:42)(cid:68)(cid:82)(cid:15) (cid:45)(cid:88)(cid:81) (cid:42)(cid:88)(cid:82)(cid:15) (cid:49)(cid:76)(cid:68)(cid:81)(cid:90)(cid:72)(cid:81) (cid:59)(cid:88)(cid:72) (cid:37)(cid:72)(cid:76)(cid:77)(cid:76)(cid:81)(cid:74) (cid:56)(cid:81)(cid:76)(cid:89)(cid:72)(cid:85)(cid:86)(cid:76)(cid:87)(cid:92) (cid:82)(cid:73) (cid:51)(cid:82)(cid:86)(cid:87)(cid:86) (cid:68)(cid:81)(cid:71) (cid:55)(cid:72)(cid:79)(cid:72)(cid:70)(cid:82)(cid:80)(cid:80)(cid:88)(cid:81)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86)",
"(cid:51)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:82)(cid:73)(cid:87)(cid:72)(cid:81) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:86)(cid:72)(cid:81)(cid:16) (cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86)(cid:15) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:76)(cid:86) (cid:75)(cid:68)(cid:83)(cid:83)(cid:72)(cid:81)(cid:86) (cid:80)(cid:82)(cid:85)(cid:72) (cid:73)(cid:85)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:87)(cid:79)(cid:92) (cid:76)(cid:81) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:74)(cid:72)(cid:81)(cid:85)(cid:72)(cid:86) (cid:68)(cid:86) (cid:87)(cid:75)(cid:72)(cid:76)(cid:85) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:86) (cid:70)(cid:68)(cid:81) (cid:69)(cid:72) (cid:72)(cid:68)(cid:86)(cid:76)(cid:79)(cid:92) (cid:88)(cid:81)(cid:71)(cid:72)(cid:85)(cid:86)(cid:87)(cid:82)(cid:82)(cid:71) (cid:73)(cid:85)(cid:82)(cid:80) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:17) (cid:53)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:76)(cid:86) (cid:72)(cid:86)(cid:86)(cid:72)(cid:81)(cid:87)(cid:76)(cid:68)(cid:79) (cid:87)(cid:82) (cid:68)(cid:83)(cid:83)(cid:79)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:86)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:44)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:40)(cid:91)(cid:87)(cid:85)(cid:68)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:16) (cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:86) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72)(cid:86)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:81)(cid:72)(cid:72)(cid:71) (cid:87)(cid:82) (cid:69)(cid:72) (cid:85)(cid:72)(cid:16) (cid:86)(cid:82)(cid:79)(cid:89)(cid:72)(cid:71)(cid:15) (cid:82)(cid:85) (cid:48)(cid:68)(cid:70)(cid:75)(cid:76)(cid:81)(cid:72) (cid:55)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:90)(cid:75)(cid:72)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:76)(cid:86) (cid:87)(cid:75)(cid:72) (cid:86)(cid:82)(cid:88)(cid:85)(cid:70)(cid:72) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72)(cid:17) (cid:44)(cid:81) (cid:87)(cid:75)(cid:76)(cid:86) (cid:90)(cid:82)(cid:85)(cid:78)(cid:15) (cid:90)(cid:72) (cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87) (cid:68) (cid:81)(cid:82)(cid:89)(cid:72)(cid:79) (cid:72)(cid:81)(cid:71)(cid:16)(cid:87)(cid:82)(cid:16)(cid:72)(cid:81)(cid:71) (cid:81)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:87)(cid:82) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:76)(cid:81) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:71)(cid:68)(cid:87)(cid:68)(cid:17) (cid:50)(cid:88)(cid:85) 
(cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:76)(cid:86) (cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:82)(cid:81) (cid:68) (cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:87)(cid:88)(cid:85)(cid:72)(cid:71) (cid:68)(cid:87)(cid:16) (cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:80)(cid:72)(cid:70)(cid:75)(cid:68)(cid:81)(cid:76)(cid:86)(cid:80) (cid:87)(cid:75)(cid:68)(cid:87) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:86) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:86) (cid:82)(cid:73) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:88)(cid:87)(cid:76)(cid:79)(cid:76)(cid:93)(cid:76)(cid:81)(cid:74) (cid:69)(cid:82)(cid:87)(cid:75) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16) (cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:81)(cid:71) (cid:90)(cid:82)(cid:85)(cid:71)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:53)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:82)(cid:81) (cid:87)(cid:75)(cid:85)(cid:72)(cid:72) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:74)(cid:72)(cid:81)(cid:85)(cid:72)(cid:86) (cid:86)(cid:75)(cid:82)(cid:90) (cid:87)(cid:75)(cid:68)(cid:87) (cid:82)(cid:88)(cid:85) (cid:68)(cid:83)(cid:83)(cid:85)(cid:82)(cid:68)(cid:70)(cid:75) (cid:68)(cid:70)(cid:75)(cid:76)(cid:72)(cid:89)(cid:72)(cid:86) (cid:68) (cid:86)(cid:76)(cid:74)(cid:81)(cid:76)(cid:73)(cid:76)(cid:70)(cid:68)(cid:81)(cid:87) (cid:76)(cid:80)(cid:83)(cid:85)(cid:82)(cid:89)(cid:72)(cid:16) (cid:80)(cid:72)(cid:81)(cid:87) (cid:82)(cid:89)(cid:72)(cid:85) (cid:87)(cid:75)(cid:72) (cid:70)(cid:88)(cid:85)(cid:85)(cid:72)(cid:81)(cid:87) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:68)(cid:85)(cid:87)(cid:17)",
"(cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:76)(cid:86) (cid:68) (cid:83)(cid:85)(cid:82)(cid:16)(cid:71)(cid:85)(cid:82)(cid:83) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72)(cid:15) (cid:80)(cid:72)(cid:68)(cid:81)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:68)(cid:87) (cid:76)(cid:87) (cid:76)(cid:86) (cid:81)(cid:82)(cid:87) (cid:68)(cid:79)(cid:90)(cid:68)(cid:92)(cid:86) (cid:81)(cid:72)(cid:70)(cid:72)(cid:86)(cid:86)(cid:68)(cid:85)(cid:92) (cid:87)(cid:82) (cid:75)(cid:68)(cid:89)(cid:72) (cid:68)(cid:81) (cid:82)(cid:89)(cid:72)(cid:85)(cid:87) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:90)(cid:75)(cid:72)(cid:81) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:76)(cid:86) (cid:70)(cid:79)(cid:72)(cid:68)(cid:85) (cid:73)(cid:85)(cid:82)(cid:80) (cid:87)(cid:75)(cid:72) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:17) (cid:55)(cid:75)(cid:76)(cid:86) (cid:76)(cid:86) (cid:76)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:68)(cid:86)(cid:87) (cid:90)(cid:76)(cid:87)(cid:75) (cid:68) (cid:81)(cid:82)(cid:81)(cid:16)(cid:83)(cid:85)(cid:82)(cid:16)(cid:71)(cid:85)(cid:82)(cid:83) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:79)(cid:76)(cid:78)(cid:72) (cid:40)(cid:81)(cid:16) (cid:74)(cid:79)(cid:76)(cid:86)(cid:75)(cid:15) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:68)(cid:81) (cid:82)(cid:89)(cid:72)(cid:85)(cid:87) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:68)(cid:79)(cid:90)(cid:68)(cid:92)(cid:86) (cid:81)(cid:72)(cid:72)(cid:71)(cid:72)(cid:71)(cid:17) (cid:41)(cid:82)(cid:85) (cid:72)(cid:91)(cid:68)(cid:80)(cid:83)(cid:79)(cid:72)(cid:15) (cid:46)(cid:76)(cid:80) (cid:11)(cid:21)(cid:19)(cid:19)(cid:19)(cid:12) (cid:86)(cid:75)(cid:82)(cid:90)(cid:86) (cid:87)(cid:75)(cid:68)(cid:87) (cid:68)(cid:81) (cid:82)(cid:89)(cid:72)(cid:85)(cid:87) (cid:86)(cid:88)(cid:69)(cid:16) (cid:77)(cid:72)(cid:70)(cid:87) (cid:76)(cid:86) (cid:88)(cid:86)(cid:72)(cid:71) (cid:82)(cid:81)(cid:79)(cid:92) (cid:76)(cid:81) (cid:25)(cid:23)(cid:8) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:70)(cid:68)(cid:86)(cid:72)(cid:86) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:90)(cid:75)(cid:76)(cid:79)(cid:72) (cid:87)(cid:75)(cid:68)(cid:87) (cid:83)(cid:72)(cid:85)(cid:70)(cid:72)(cid:81)(cid:87)(cid:68)(cid:74)(cid:72) (cid:76)(cid:86) (cid:82)(cid:89)(cid:72)(cid:85) (cid:28)(cid:25)(cid:8) (cid:76)(cid:81) (cid:40)(cid:81)(cid:74)(cid:79)(cid:76)(cid:86)(cid:75)(cid:17)",
"(cid:40)(cid:89)(cid:72)(cid:81) (cid:76)(cid:81) (cid:83)(cid:85)(cid:82)(cid:16)(cid:71)(cid:85)(cid:82)(cid:83) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72)(cid:86) (cid:79)(cid:76)(cid:78)(cid:72) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72)(cid:15) (cid:83)(cid:85)(cid:82)(cid:16) (cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:82)(cid:80)(cid:76)(cid:87)(cid:87)(cid:72)(cid:71) (cid:87)(cid:82) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:71)(cid:72)(cid:74)(cid:85)(cid:72)(cid:72)(cid:86) (cid:76)(cid:81) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:74)(cid:72)(cid:81)(cid:85)(cid:72)(cid:86)(cid:17) (cid:51)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:80)(cid:82)(cid:85)(cid:72) (cid:82)(cid:73)(cid:87)(cid:72)(cid:81) (cid:76)(cid:81) (cid:76)(cid:81)(cid:16) (cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:79) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:74)(cid:72)(cid:81)(cid:85)(cid:72)(cid:86) (cid:87)(cid:75)(cid:68)(cid:81) (cid:76)(cid:81) (cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:79) (cid:74)(cid:72)(cid:81)(cid:16) (cid:85)(cid:72)(cid:86) (cid:79)(cid:76)(cid:78)(cid:72) (cid:81)(cid:72)(cid:90)(cid:86)(cid:90)(cid:76)(cid:85)(cid:72)(cid:17) (cid:53)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72)(cid:86)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:16) (cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:11)(cid:39)(cid:51)(cid:86)(cid:12) (cid:76)(cid:86) (cid:76)(cid:80)(cid:83)(cid:82)(cid:85)(cid:87)(cid:68)(cid:81)(cid:87) (cid:87)(cid:82) (cid:68)(cid:83)(cid:83)(cid:79)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:86)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:48)(cid:68)(cid:70)(cid:75)(cid:76)(cid:81)(cid:72) (cid:55)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:81)(cid:72)(cid:72)(cid:71) (cid:87)(cid:82) (cid:69)(cid:72) (cid:80)(cid:68)(cid:71)(cid:72) (cid:72)(cid:91)(cid:83)(cid:79)(cid:76)(cid:70)(cid:76)(cid:87) (cid:90)(cid:75)(cid:72)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:76)(cid:86) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:16) (cid:79)(cid:68)(cid:87)(cid:72)(cid:71) (cid:76)(cid:81)(cid:87)(cid:82) (cid:68) (cid:87)(cid:68)(cid:85)(cid:74)(cid:72)(cid:87) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:11)(cid:58)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:25)(cid:68)(cid:15)(cid:69)(cid:15) (cid:21)(cid:19)(cid:20)(cid:27)(cid:12) (cid:82)(cid:85) (cid:44)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:40)(cid:91)(cid:87)(cid:85)(cid:68)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:85)(cid:72)(cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:80)(cid:76)(cid:74)(cid:75)(cid:87) (cid:76)(cid:81)(cid:89)(cid:82)(cid:79)(cid:89)(cid:72) (cid:72)(cid:81)(cid:87)(cid:76)(cid:87)(cid:76)(cid:72)(cid:86) 
(cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:85)(cid:72)(cid:71) (cid:87)(cid:82) (cid:69)(cid:92) (cid:87)(cid:75)(cid:72)(cid:86)(cid:72) (cid:39)(cid:51)(cid:86)(cid:17)",
"(cid:41)(cid:76)(cid:74)(cid:88)(cid:85)(cid:72) (cid:20)(cid:29) (cid:36)(cid:81) (cid:72)(cid:91)(cid:68)(cid:80)(cid:83)(cid:79)(cid:72) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:69)(cid:72)(cid:87)(cid:90)(cid:72)(cid:72)(cid:81) (cid:87)(cid:90)(cid:82) (cid:83)(cid:72)(cid:82)(cid:16) (cid:83)(cid:79)(cid:72)(cid:17) (cid:55)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:86) (cid:87)(cid:82) (cid:179)(cid:87)(cid:75)(cid:76)(cid:86) (cid:70)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:72)(cid:85)(cid:180) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:68)(cid:81)(cid:71) (cid:76)(cid:86) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:72)(cid:71) (cid:76)(cid:81)(cid:87)(cid:82) (cid:179)(cid:76)(cid:87)(cid:180) (cid:76)(cid:81) (cid:40)(cid:81)(cid:74)(cid:79)(cid:76)(cid:86)(cid:75)(cid:17) (cid:44)(cid:87) (cid:81)(cid:72)(cid:72)(cid:71)(cid:86) (cid:87)(cid:82) (cid:69)(cid:72) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:72)(cid:71) (cid:73)(cid:85)(cid:82)(cid:80) (cid:87)(cid:75)(cid:72) (cid:90)(cid:76)(cid:71)(cid:72)(cid:85) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:17)",
"(cid:48)(cid:82)(cid:85)(cid:72) (cid:70)(cid:82)(cid:81)(cid:70)(cid:85)(cid:72)(cid:87)(cid:72)(cid:79)(cid:92)(cid:15) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:39)(cid:51)(cid:86) (cid:76)(cid:81)(cid:89)(cid:82)(cid:79)(cid:89)(cid:72)(cid:86) (cid:76)(cid:12) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:79)(cid:82)(cid:70)(cid:68)(cid:87)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:83)(cid:82)(cid:86)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:81) (cid:68) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:68) (cid:83)(cid:85)(cid:82)(cid:16) (cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71)(cid:15) (cid:68)(cid:81)(cid:71) (cid:76)(cid:76)(cid:12) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:87)(cid:92)(cid:83)(cid:72) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:87)(cid:75)(cid:68)(cid:87) (cid:76)(cid:86) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71)(cid:17) (cid:55)(cid:75)(cid:76)(cid:86) (cid:76)(cid:86) (cid:76)(cid:79)(cid:79)(cid:88)(cid:86)(cid:87)(cid:85)(cid:68)(cid:87)(cid:72)(cid:71) (cid:76)(cid:81) (cid:41)(cid:76)(cid:74)(cid:16) (cid:88)(cid:85)(cid:72) (cid:20)(cid:15) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:72)(cid:71) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:76)(cid:81) (cid:83)(cid:68)(cid:85)(cid:72)(cid:81)(cid:87)(cid:75)(cid:72)(cid:86)(cid:76)(cid:86)(cid:17) (cid:37)(cid:82)(cid:87)(cid:75) (cid:76)(cid:81)(cid:86)(cid:87)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) (cid:82)(cid:73) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:70)(cid:68)(cid:81) (cid:69)(cid:72) (cid:85)(cid:72)(cid:83)(cid:79)(cid:68)(cid:70)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75) (cid:87)(cid:75)(cid:72) (cid:82)(cid:89)(cid:72)(cid:85)(cid:87) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:11)(cid:179)(cid:76)(cid:87)(cid:180)(cid:12)(cid:15) (cid:69)(cid:88)(cid:87) (cid:87)(cid:75)(cid:72)(cid:92) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85) (cid:87)(cid:82) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:87)(cid:75)(cid:76)(cid:81)(cid:74)(cid:86) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:39)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:76)(cid:86) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:73)(cid:85)(cid:82)(cid:80) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:11)(cid:61)(cid:51)(cid:12) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:11)(cid:38)(cid:75)(cid:72)(cid:81) (cid:68)(cid:81)(cid:71) (cid:49)(cid:74)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:30) (cid:60)(cid:76)(cid:81) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:26)(cid:15) 
(cid:21)(cid:19)(cid:20)(cid:27)(cid:12) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:73)(cid:82)(cid:70)(cid:88)(cid:86) (cid:76)(cid:86) (cid:82)(cid:81) (cid:85)(cid:72)(cid:16) (cid:86)(cid:82)(cid:79)(cid:89)(cid:76)(cid:81)(cid:74) (cid:68)(cid:81) (cid:68)(cid:81)(cid:68)(cid:83)(cid:75)(cid:82)(cid:85)(cid:76)(cid:70) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:87)(cid:82) (cid:76)(cid:87)(cid:86) (cid:68)(cid:81)(cid:87)(cid:72)(cid:70)(cid:72)(cid:71)(cid:72)(cid:81)(cid:87)(cid:15) (cid:68)(cid:86)(cid:16) (cid:86)(cid:88)(cid:80)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:83)(cid:82)(cid:86)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:68)(cid:79)(cid:85)(cid:72)(cid:68)(cid:71)(cid:92) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:72)(cid:71)(cid:17) (cid:43)(cid:72)(cid:85)(cid:72) (cid:90)(cid:72) (cid:71)(cid:82) (cid:81)(cid:82)(cid:87) (cid:68)(cid:87)(cid:87)(cid:72)(cid:80)(cid:83)(cid:87) (cid:87)(cid:82) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:89)(cid:72) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:87)(cid:82) (cid:76)(cid:87)(cid:86) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:17) (cid:39)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:70)(cid:68)(cid:81) (cid:87)(cid:75)(cid:88)(cid:86) (cid:69)(cid:72) (cid:89)(cid:76)(cid:72)(cid:90)(cid:72)(cid:71) (cid:68)(cid:86) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:86)(cid:87)(cid:72)(cid:83) (cid:82)(cid:73) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81)(cid:15) (cid:69)(cid:88)(cid:87) (cid:87)(cid:75)(cid:76)(cid:86) (cid:87)(cid:68)(cid:86)(cid:78) (cid:76)(cid:86) (cid:68)(cid:79)(cid:86)(cid:82) (cid:76)(cid:80)(cid:83)(cid:82)(cid:85)(cid:87)(cid:68)(cid:81)(cid:87) (cid:76)(cid:81) (cid:76)(cid:87)(cid:86) (cid:82)(cid:90)(cid:81) (cid:85)(cid:76)(cid:74)(cid:75)(cid:87)(cid:17) (cid:44)(cid:81) (cid:68)(cid:83)(cid:83)(cid:79)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:79)(cid:76)(cid:78)(cid:72) (cid:80)(cid:68)(cid:70)(cid:75)(cid:76)(cid:81)(cid:72) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:16) (cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:15) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:82)(cid:81)(cid:79)(cid:92) (cid:81)(cid:72)(cid:72)(cid:71) (cid:87)(cid:82) (cid:69)(cid:72) (cid:76)(cid:71)(cid:72)(cid:81)(cid:87)(cid:76)(cid:73)(cid:76)(cid:72)(cid:71) (cid:68)(cid:81)(cid:71) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:72)(cid:71) (cid:70)(cid:82)(cid:85)(cid:85)(cid:72)(cid:70)(cid:87)(cid:79)(cid:92) (cid:69)(cid:88)(cid:87) (cid:87)(cid:75)(cid:72)(cid:92) (cid:71)(cid:82) (cid:81)(cid:82)(cid:87) (cid:81)(cid:72)(cid:72)(cid:71) (cid:87)(cid:82) (cid:69)(cid:72) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:89)(cid:72)(cid:71)(cid:17)",
"(cid:55)(cid:85)(cid:68)(cid:71)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79)(cid:79)(cid:92) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:75)(cid:68)(cid:86)",
"(cid:69)(cid:72)(cid:72)(cid:81) (cid:73)(cid:82)(cid:85)(cid:80)(cid:88)(cid:79)(cid:68)(cid:87)(cid:72)(cid:71) (cid:68)(cid:86) (cid:68) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72) (cid:79)(cid:68)(cid:69)(cid:72)(cid:79)(cid:76)(cid:81)(cid:74) (cid:83)(cid:85)(cid:82)(cid:69)(cid:79)(cid:72)(cid:80) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:72)(cid:68)(cid:70)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:85)(cid:72)(cid:70)(cid:72)(cid:76)(cid:89)(cid:72)(cid:86) (cid:68) (cid:87)(cid:68)(cid:74) (cid:87)(cid:75)(cid:68)(cid:87) (cid:76)(cid:81)(cid:71)(cid:76)(cid:70)(cid:68)(cid:87)(cid:72)(cid:86) (cid:90)(cid:75)(cid:72)(cid:87)(cid:75)(cid:72)(cid:85) (cid:87)(cid:75)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71) (cid:75)(cid:68)(cid:86) (cid:68) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:69)(cid:72)(cid:73)(cid:82)(cid:85)(cid:72) (cid:76)(cid:87) (cid:68)(cid:81)(cid:71) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71)(cid:17) (cid:60)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17) (cid:11)(cid:21)(cid:19)(cid:20)(cid:24)(cid:12) (cid:79)(cid:72)(cid:89)(cid:72)(cid:85)(cid:68)(cid:74)(cid:72)(cid:86) (cid:79)(cid:72)(cid:91)(cid:76)(cid:70)(cid:68)(cid:79)(cid:15) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:68)(cid:81)(cid:71) (cid:86)(cid:92)(cid:81)(cid:16) (cid:87)(cid:68)(cid:70)(cid:87)(cid:76)(cid:70) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72)(cid:86) (cid:87)(cid:82) (cid:71)(cid:72)(cid:87)(cid:72)(cid:70)(cid:87) (cid:39)(cid:51)(cid:86) (cid:69)(cid:72)(cid:73)(cid:82)(cid:85)(cid:72) (cid:72)(cid:68)(cid:70)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) (cid:73)(cid:85)(cid:82)(cid:80) (cid:68) (cid:83)(cid:85)(cid:72)(cid:71)(cid:72)(cid:73)(cid:76)(cid:81)(cid:72)(cid:71) (cid:79)(cid:76)(cid:86)(cid:87) (cid:82)(cid:73) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86)(cid:17) (cid:42)(cid:76)(cid:68)(cid:81)(cid:81)(cid:72)(cid:79)(cid:79)(cid:68) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17) (cid:11)(cid:21)(cid:19)(cid:20)(cid:26)(cid:12) (cid:88)(cid:87)(cid:76)(cid:79)(cid:76)(cid:93)(cid:72)(cid:86) (cid:68) (cid:79)(cid:76)(cid:81)(cid:72)(cid:68)(cid:85)(cid:16)(cid:70)(cid:75)(cid:68)(cid:76)(cid:81) (cid:70)(cid:82)(cid:81)(cid:71)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:85)(cid:68)(cid:81)(cid:16) (cid:71)(cid:82)(cid:80) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:11)(cid:38)(cid:53)(cid:41)(cid:12) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86)(cid:76)(cid:73)(cid:76)(cid:72)(cid:85) (cid:87)(cid:82) (cid:86)(cid:76)(cid:80)(cid:88)(cid:79)(cid:87)(cid:68)(cid:81)(cid:72)(cid:82)(cid:88)(cid:86)(cid:79)(cid:92) (cid:83)(cid:85)(cid:72)(cid:16) (cid:71)(cid:76)(cid:70)(cid:87) (cid:87)(cid:75)(cid:72) (cid:83)(cid:82)(cid:86)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:86) (cid:90)(cid:72)(cid:79)(cid:79) (cid:68)(cid:86) (cid:87)(cid:75)(cid:72) (cid:83)(cid:72)(cid:85)(cid:86)(cid:82)(cid:81) (cid:81)(cid:88)(cid:80)(cid:69)(cid:72)(cid:85) (cid:82)(cid:73) (cid:68) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:69)(cid:68)(cid:86)(cid:72)(cid:71) 
(cid:82)(cid:81) (cid:79)(cid:72)(cid:91)(cid:76)(cid:70)(cid:68)(cid:79) (cid:68)(cid:81)(cid:71) (cid:86)(cid:92)(cid:81)(cid:87)(cid:68)(cid:70)(cid:87)(cid:76)(cid:70) (cid:76)(cid:81)(cid:16) (cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:55)(cid:75)(cid:72)(cid:86)(cid:72) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72)(cid:71)(cid:16)(cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:80)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71)(cid:86) (cid:85)(cid:72)(cid:84)(cid:88)(cid:76)(cid:85)(cid:72) (cid:79)(cid:68)(cid:69)(cid:82)(cid:85)(cid:16)(cid:76)(cid:81)(cid:87)(cid:72)(cid:81)(cid:86)(cid:76)(cid:89)(cid:72) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72) (cid:72)(cid:81)(cid:74)(cid:76)(cid:81)(cid:72)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74)(cid:17)",
"(cid:44)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:68)(cid:86)(cid:87)(cid:15) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17) (cid:11)(cid:21)(cid:19)(cid:20)(cid:25)(cid:12) (cid:88)(cid:86)(cid:72)(cid:86) (cid:68) (cid:80)(cid:88)(cid:79)(cid:87)(cid:76)(cid:16) (cid:79)(cid:68)(cid:92)(cid:72)(cid:85) (cid:83)(cid:72)(cid:85)(cid:70)(cid:72)(cid:83)(cid:87)(cid:85)(cid:82)(cid:81) (cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:81)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:87)(cid:82) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:72)(cid:79)(cid:76)(cid:80)(cid:76)(cid:81)(cid:68)(cid:87)(cid:72)(cid:86) (cid:87)(cid:75)(cid:72) (cid:81)(cid:72)(cid:72)(cid:71) (cid:73)(cid:82)(cid:85) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72) (cid:72)(cid:81)(cid:74)(cid:76)(cid:81)(cid:72)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74)(cid:17) (cid:55)(cid:75)(cid:72) (cid:76)(cid:81)(cid:83)(cid:88)(cid:87) (cid:87)(cid:82) (cid:87)(cid:75)(cid:72)(cid:76)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:76)(cid:86) (cid:68) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74) (cid:70)(cid:82)(cid:81)(cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:87)(cid:72)(cid:71) (cid:69)(cid:92) (cid:70)(cid:82)(cid:81)(cid:16) (cid:70)(cid:68)(cid:87)(cid:72)(cid:81)(cid:68)(cid:87)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:73) (cid:90)(cid:82)(cid:85)(cid:71) (cid:87)(cid:82)(cid:78)(cid:72)(cid:81)(cid:86) (cid:76)(cid:81) (cid:68) (cid:73)(cid:76)(cid:91)(cid:72)(cid:71)(cid:16)(cid:79)(cid:72)(cid:81)(cid:74)(cid:87)(cid:75) (cid:90)(cid:76)(cid:81)(cid:71)(cid:82)(cid:90)(cid:17) (cid:43)(cid:82)(cid:90)(cid:72)(cid:89)(cid:72)(cid:85)(cid:15) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:82)(cid:73) (cid:68) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:87)(cid:75)(cid:68)(cid:87) (cid:83)(cid:85)(cid:82)(cid:89)(cid:76)(cid:71)(cid:72)(cid:86) (cid:70)(cid:85)(cid:88)(cid:70)(cid:76)(cid:68)(cid:79) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:82) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:72) (cid:87)(cid:75)(cid:72) (cid:76)(cid:71)(cid:72)(cid:81)(cid:87)(cid:76)(cid:87)(cid:92) (cid:82)(cid:73) (cid:68) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:87)(cid:92)(cid:83)(cid:76)(cid:70)(cid:68)(cid:79)(cid:79)(cid:92) (cid:73)(cid:82)(cid:88)(cid:81)(cid:71) (cid:82)(cid:88)(cid:87)(cid:86)(cid:76)(cid:71)(cid:72) (cid:87)(cid:75)(cid:72) (cid:79)(cid:82)(cid:70)(cid:68)(cid:79) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:15) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:88)(cid:86) (cid:70)(cid:68)(cid:81)(cid:81)(cid:82)(cid:87) (cid:69)(cid:72) 
(cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72)(cid:79)(cid:92) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75) (cid:68) (cid:90)(cid:76)(cid:81)(cid:71)(cid:82)(cid:90)(cid:16)(cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:80)(cid:88)(cid:79)(cid:87)(cid:76)(cid:16)(cid:79)(cid:68)(cid:92)(cid:72)(cid:85) (cid:83)(cid:72)(cid:85)(cid:70)(cid:72)(cid:83)(cid:16) (cid:87)(cid:85)(cid:82)(cid:81) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:17) (cid:41)(cid:82)(cid:85) (cid:72)(cid:91)(cid:68)(cid:80)(cid:83)(cid:79)(cid:72)(cid:15) (cid:76)(cid:81) (cid:41)(cid:76)(cid:74)(cid:88)(cid:85)(cid:72) (cid:20) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:16) (cid:72)(cid:81)(cid:87) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:15) (cid:87)(cid:75)(cid:72) (cid:87)(cid:75)(cid:76)(cid:85)(cid:71) (cid:83)(cid:72)(cid:85)(cid:86)(cid:82)(cid:81) (cid:86)(cid:76)(cid:81)(cid:74)(cid:88)(cid:79)(cid:68)(cid:85) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:11)(cid:179)(cid:76)(cid:87)(cid:180)(cid:12) (cid:76)(cid:86) (cid:11)(cid:179)(cid:70)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:72)(cid:85)(cid:180)(cid:12)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:68)(cid:83)(cid:83)(cid:72)(cid:68)(cid:85)(cid:86) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:89)(cid:72)(cid:85)(cid:92) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:15) (cid:86)(cid:72)(cid:89)(cid:72)(cid:85)(cid:68)(cid:79) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) (cid:69)(cid:72)(cid:73)(cid:82)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:83)(cid:85)(cid:82)(cid:16)(cid:71)(cid:85)(cid:82)(cid:83) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:17) (cid:55)(cid:75)(cid:72)(cid:85)(cid:72)(cid:16) (cid:73)(cid:82)(cid:85)(cid:72)(cid:15) (cid:79)(cid:82)(cid:81)(cid:74)(cid:16)(cid:71)(cid:76)(cid:86)(cid:87)(cid:68)(cid:81)(cid:70)(cid:72) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:81)(cid:72)(cid:72)(cid:71)(cid:86) (cid:87)(cid:82) (cid:69)(cid:72) (cid:70)(cid:68)(cid:83)(cid:87)(cid:88)(cid:85)(cid:72)(cid:71) (cid:76)(cid:81) (cid:82)(cid:85)(cid:71)(cid:72)(cid:85) (cid:87)(cid:82) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:72) (cid:87)(cid:75)(cid:72) (cid:87)(cid:92)(cid:83)(cid:72) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:17)",
"(cid:44)(cid:81) (cid:87)(cid:75)(cid:76)(cid:86) (cid:83)(cid:68)(cid:83)(cid:72)(cid:85)(cid:15) (cid:90)(cid:72) (cid:71)(cid:72)(cid:86)(cid:70)(cid:85)(cid:76)(cid:69)(cid:72) (cid:68) (cid:81)(cid:82)(cid:89)(cid:72)(cid:79) (cid:49)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:39)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:51)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:53)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78)(cid:15) (cid:81)(cid:68)(cid:80)(cid:72)(cid:71) (cid:49)(cid:39)(cid:51)(cid:53) (cid:87)(cid:75)(cid:68)(cid:87) (cid:70)(cid:68)(cid:81) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:81) (cid:68) (cid:80)(cid:88)(cid:70)(cid:75) (cid:79)(cid:68)(cid:85)(cid:74)(cid:72)(cid:85) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:17) (cid:55)(cid:75)(cid:72) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:80)(cid:68)(cid:78)(cid:72)(cid:86) (cid:88)(cid:86)(cid:72) (cid:82)(cid:73) (cid:70)(cid:82)(cid:81)(cid:16) (cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:87) (cid:87)(cid:90)(cid:82) (cid:79)(cid:72)(cid:89)(cid:72)(cid:79)(cid:86) (cid:82)(cid:73) (cid:74)(cid:85)(cid:68)(cid:81)(cid:88)(cid:79)(cid:68)(cid:85)(cid:76)(cid:87)(cid:92)(cid:29) (cid:87)(cid:75)(cid:72) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71) (cid:79)(cid:72)(cid:89)(cid:72)(cid:79)(cid:17) (cid:36)(cid:81) (cid:76)(cid:79)(cid:79)(cid:88)(cid:86)(cid:87)(cid:85)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:49)(cid:39)(cid:51)(cid:53) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:76)(cid:86) (cid:74)(cid:76)(cid:89)(cid:72)(cid:81) (cid:76)(cid:81) (cid:41)(cid:76)(cid:74)(cid:88)(cid:85)(cid:72) (cid:21)(cid:17) (cid:55)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:76)(cid:81)(cid:74) (cid:83)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86) (cid:76)(cid:86) (cid:76)(cid:80)(cid:83)(cid:79)(cid:72)(cid:80)(cid:72)(cid:81)(cid:87)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75) (cid:68) (cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:87)(cid:88)(cid:85)(cid:72)(cid:71) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:80)(cid:72)(cid:70)(cid:75)(cid:68)(cid:81)(cid:76)(cid:86)(cid:80)(cid:17) (cid:41)(cid:82)(cid:85) (cid:72)(cid:68)(cid:70)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) (cid:87)(cid:82)(cid:16) (cid:78)(cid:72)(cid:81) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:83)(cid:85)(cid:82)(cid:16)(cid:71)(cid:85)(cid:82)(cid:83) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:15) (cid:87)(cid:75)(cid:72) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:71)(cid:86) (cid:87)(cid:82) (cid:87)(cid:75)(cid:72) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72) (cid:76)(cid:81) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:87)(cid:75)(cid:72) 
(cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:80)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81)(cid:72)(cid:71) (cid:69)(cid:92) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72)(cid:81) (cid:93)(cid:72)(cid:85)(cid:82)(cid:86) (cid:76)(cid:81) (cid:87)(cid:82) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:69)(cid:92) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:55)(cid:75)(cid:72) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:76)(cid:81)(cid:74) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:86) (cid:70)(cid:82)(cid:80)(cid:69)(cid:76)(cid:81)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:87)(cid:82) (cid:80)(cid:68)(cid:78)(cid:72) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:81)(cid:68)(cid:79) (cid:83)(cid:85)(cid:72)(cid:71)(cid:76)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:81) (cid:87)(cid:75)(cid:72) (cid:79)(cid:82)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) (cid:87)(cid:92)(cid:83)(cid:72) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:17) (cid:58)(cid:72) (cid:71)(cid:72)(cid:80)(cid:82)(cid:81)(cid:86)(cid:87)(cid:85)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:72) (cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72)(cid:16) (cid:81)(cid:72)(cid:86)(cid:86) (cid:82)(cid:73) (cid:82)(cid:88)(cid:85) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:82)(cid:81) (cid:87)(cid:75)(cid:85)(cid:72)(cid:72) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:70)(cid:82)(cid:81)(cid:16)",
"(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:71)(cid:68)(cid:87)(cid:68)(cid:86)(cid:72)(cid:87)(cid:86) (cid:68)(cid:81)(cid:71) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:86)(cid:75)(cid:82)(cid:90) (cid:87)(cid:75)(cid:68)(cid:87) (cid:82)(cid:88)(cid:85) (cid:68)(cid:83)(cid:16) (cid:83)(cid:85)(cid:82)(cid:68)(cid:70)(cid:75) (cid:82)(cid:88)(cid:87)(cid:83)(cid:72)(cid:85)(cid:73)(cid:82)(cid:85)(cid:80)(cid:86) (cid:87)(cid:75)(cid:72) (cid:70)(cid:88)(cid:85)(cid:85)(cid:72)(cid:81)(cid:87) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:68)(cid:85)(cid:87) (cid:69)(cid:92) (cid:68) (cid:73)(cid:68)(cid:76)(cid:85)(cid:79)(cid:92) (cid:79)(cid:68)(cid:85)(cid:74)(cid:72) (cid:80)(cid:68)(cid:85)(cid:74)(cid:76)(cid:81)(cid:17)",
"(cid:58)(cid:72) (cid:68)(cid:79)(cid:86)(cid:82) (cid:83)(cid:72)(cid:85)(cid:73)(cid:82)(cid:85)(cid:80) (cid:68)(cid:69)(cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:86)(cid:87)(cid:88)(cid:71)(cid:76)(cid:72)(cid:86) (cid:87)(cid:82) (cid:72)(cid:91)(cid:83)(cid:79)(cid:82)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:70)(cid:82)(cid:80)(cid:83)(cid:82)(cid:81)(cid:72)(cid:81)(cid:87)(cid:86) (cid:82)(cid:73) (cid:82)(cid:88)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:17) (cid:58)(cid:72) (cid:86)(cid:75)(cid:82)(cid:90) (cid:87)(cid:75)(cid:68)(cid:87) (cid:90)(cid:82)(cid:85)(cid:71)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:86) (cid:80)(cid:82)(cid:85)(cid:72) (cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72) (cid:76)(cid:81) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:70)(cid:82)(cid:81)(cid:70)(cid:85)(cid:72)(cid:87)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:69)(cid:72)(cid:70)(cid:68)(cid:88)(cid:86)(cid:72) (cid:76)(cid:87) (cid:86)(cid:72)(cid:85)(cid:89)(cid:72)(cid:86) (cid:68)(cid:86) (cid:68) (cid:80)(cid:68)(cid:87)(cid:70)(cid:75)(cid:76)(cid:81)(cid:74) (cid:80)(cid:72)(cid:70)(cid:75)(cid:68)(cid:81)(cid:76)(cid:86)(cid:80) (cid:87)(cid:75)(cid:68)(cid:87) (cid:80)(cid:68)(cid:87)(cid:70)(cid:75)(cid:72)(cid:86) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:68)(cid:74)(cid:68)(cid:76)(cid:81)(cid:86)(cid:87) (cid:87)(cid:75)(cid:68)(cid:87) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:39)(cid:51) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:17) (cid:44)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:68)(cid:86)(cid:87)(cid:15) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:16) (cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:70)(cid:68)(cid:81) (cid:69)(cid:72) (cid:70)(cid:82)(cid:81)(cid:86)(cid:76)(cid:71)(cid:72)(cid:85)(cid:72)(cid:71) (cid:68)(cid:86) (cid:68)(cid:81) (cid:68)(cid:88)(cid:91)(cid:76)(cid:79)(cid:76)(cid:68)(cid:85)(cid:92) (cid:87)(cid:82)(cid:82)(cid:79) (cid:73)(cid:82)(cid:85) (cid:90)(cid:82)(cid:85)(cid:71)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:82) (cid:76)(cid:80)(cid:83)(cid:85)(cid:82)(cid:89)(cid:72) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:68)(cid:70)(cid:70)(cid:88)(cid:85)(cid:68)(cid:70)(cid:92) (cid:69)(cid:92) (cid:73)(cid:76)(cid:79)(cid:87)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:82)(cid:88)(cid:87) (cid:76)(cid:85)(cid:85)(cid:72)(cid:79)(cid:72)(cid:89)(cid:68)(cid:81)(cid:87) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:86)(cid:17) (cid:36)(cid:79)(cid:79) (cid:70)(cid:82)(cid:71)(cid:72) (cid:76)(cid:86) (cid:68)(cid:89)(cid:68)(cid:76)(cid:79)(cid:16) (cid:68)(cid:69)(cid:79)(cid:72) (cid:68)(cid:87) 
https://github.com/ningningyang/NDPR.",
"(cid:50)(cid:88)(cid:85) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:86)(cid:88)(cid:80)(cid:80)(cid:68)(cid:85)(cid:76)(cid:93)(cid:72)(cid:71) (cid:68)(cid:86) (cid:73)(cid:82)(cid:79)(cid:79)(cid:82)(cid:90)(cid:86)(cid:29)",
"(cid:135) (cid:58)(cid:72) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72) (cid:68) (cid:81)(cid:82)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81)(cid:16)(cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:81)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78) (cid:87)(cid:82) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:16) (cid:81)(cid:72)(cid:86)(cid:72) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:71)(cid:68)(cid:87)(cid:68) (cid:69)(cid:92) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72)(cid:76)(cid:85) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:17) (cid:135) (cid:58)(cid:72) (cid:72)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:72) (cid:82)(cid:88)(cid:85) (cid:86)(cid:92)(cid:86)(cid:87)(cid:72)(cid:80) (cid:82)(cid:81) (cid:87)(cid:75)(cid:85)(cid:72)(cid:72) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:74)(cid:72)(cid:81)(cid:85)(cid:72)(cid:86) (cid:68)(cid:81)(cid:71) (cid:86)(cid:75)(cid:82)(cid:90) (cid:87)(cid:75)(cid:68)(cid:87) (cid:82)(cid:88)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:70)(cid:82)(cid:81)(cid:16) (cid:86)(cid:76)(cid:86)(cid:87)(cid:72)(cid:81)(cid:87)(cid:79)(cid:92) (cid:82)(cid:88)(cid:87)(cid:83)(cid:72)(cid:85)(cid:73)(cid:82)(cid:85)(cid:80)(cid:86) (cid:87)(cid:75)(cid:72) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:68)(cid:85)(cid:87) (cid:69)(cid:92) (cid:68) (cid:79)(cid:68)(cid:85)(cid:74)(cid:72) (cid:80)(cid:68)(cid:85)(cid:74)(cid:76)(cid:81) (cid:82)(cid:81) (cid:68)(cid:79)(cid:79) (cid:87)(cid:75)(cid:85)(cid:72)(cid:72) (cid:71)(cid:68)(cid:87)(cid:68)(cid:86)(cid:72)(cid:87)(cid:86)(cid:17) (cid:135) (cid:58)(cid:72) (cid:68)(cid:79)(cid:86)(cid:82) (cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87) (cid:72)(cid:91)(cid:83)(cid:72)(cid:85)(cid:76)(cid:80)(cid:72)(cid:81)(cid:87)(cid:68)(cid:79) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:87)(cid:75)(cid:68)(cid:87) (cid:71)(cid:72)(cid:80)(cid:82)(cid:81)(cid:86)(cid:87)(cid:85)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:72) (cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72)(cid:81)(cid:72)(cid:86)(cid:86) (cid:82)(cid:73) (cid:89)(cid:68)(cid:85)(cid:76)(cid:82)(cid:88)(cid:86) (cid:70)(cid:82)(cid:80)(cid:16) (cid:83)(cid:82)(cid:81)(cid:72)(cid:81)(cid:87)(cid:86) (cid:76)(cid:81) (cid:82)(cid:88)(cid:85) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:68)(cid:81)(cid:71) (cid:68)(cid:81)(cid:68)(cid:79)(cid:92)(cid:93)(cid:72) (cid:86)(cid:82)(cid:80)(cid:72) (cid:80)(cid:76)(cid:86)(cid:87)(cid:68)(cid:78)(cid:72)(cid:86) (cid:80)(cid:68)(cid:71)(cid:72) (cid:69)(cid:92) (cid:82)(cid:88)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:82)(cid:81) (cid:72)(cid:68)(cid:70)(cid:75) (cid:71)(cid:68)(cid:87)(cid:68)(cid:86)(cid:72)(cid:87)(cid:17)",
"(cid:39)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:71)(cid:72)(cid:87)(cid:72)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:85)(cid:76)(cid:74)(cid:76)(cid:81)(cid:68)(cid:87)(cid:72)(cid:86) (cid:73)(cid:85)(cid:82)(cid:80) (cid:40)(cid:80)(cid:83)(cid:87)(cid:92) (cid:38)(cid:68)(cid:87)(cid:72)(cid:74)(cid:82)(cid:85)(cid:92) (cid:11)(cid:40)(cid:38)(cid:12) (cid:71)(cid:72)(cid:87)(cid:72)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81)(cid:15) (cid:68) (cid:87)(cid:68)(cid:86)(cid:78) (cid:68)(cid:76)(cid:80)(cid:72)(cid:71) (cid:87)(cid:82) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85) (cid:70)(cid:72)(cid:85)(cid:87)(cid:68)(cid:76)(cid:81) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:72)(cid:79)(cid:72)(cid:80)(cid:72)(cid:81)(cid:87)(cid:86) (cid:76)(cid:81) (cid:86)(cid:92)(cid:81)(cid:16) (cid:87)(cid:68)(cid:70)(cid:87)(cid:76)(cid:70) (cid:87)(cid:85)(cid:72)(cid:72)(cid:69)(cid:68)(cid:81)(cid:78)(cid:86) (cid:11)(cid:38)(cid:75)(cid:88)(cid:81)(cid:74) (cid:68)(cid:81)(cid:71) (cid:42)(cid:76)(cid:79)(cid:71)(cid:72)(cid:68)(cid:15) (cid:21)(cid:19)(cid:20)(cid:19)(cid:30) (cid:38)(cid:68)(cid:76) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:20)(cid:30) (cid:59)(cid:88)(cid:72) (cid:68)(cid:81)(cid:71) (cid:60)(cid:68)(cid:81)(cid:74)(cid:15) (cid:21)(cid:19)(cid:20)(cid:22)(cid:12)(cid:17) (cid:39)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:16) (cid:81)(cid:82)(cid:88)(cid:81) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:90)(cid:68)(cid:86) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:68)(cid:86) (cid:68)(cid:81) (cid:76)(cid:81)(cid:71)(cid:72)(cid:83)(cid:72)(cid:81)(cid:16) (cid:71)(cid:72)(cid:81)(cid:87) (cid:87)(cid:68)(cid:86)(cid:78) (cid:76)(cid:81) (cid:11)(cid:60)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:12)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:79)(cid:72)(cid:89)(cid:72)(cid:85)(cid:68)(cid:74)(cid:72)(cid:71) (cid:68) (cid:86)(cid:72)(cid:87) (cid:82)(cid:73) (cid:86)(cid:83)(cid:72)(cid:70)(cid:76)(cid:68)(cid:79)(cid:79)(cid:92) (cid:71)(cid:72)(cid:86)(cid:76)(cid:74)(cid:81)(cid:72)(cid:71) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72)(cid:86) (cid:87)(cid:82) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85) (cid:39)(cid:51)(cid:86) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:87)(cid:72)(cid:91)(cid:87) (cid:80)(cid:72)(cid:86)(cid:86)(cid:68)(cid:74)(cid:72)(cid:86)(cid:17) (cid:42)(cid:76)(cid:68)(cid:81)(cid:81)(cid:72)(cid:79)(cid:79)(cid:68) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17) (cid:11)(cid:21)(cid:19)(cid:20)(cid:26)(cid:12) (cid:72)(cid:80)(cid:16) (cid:83)(cid:79)(cid:82)(cid:92)(cid:72)(cid:71) (cid:68) (cid:79)(cid:76)(cid:81)(cid:72)(cid:68)(cid:85)(cid:16)(cid:70)(cid:75)(cid:68)(cid:76)(cid:81) (cid:38)(cid:53)(cid:41) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86)(cid:76)(cid:73)(cid:76)(cid:72)(cid:85) (cid:87)(cid:82) (cid:77)(cid:82)(cid:76)(cid:81)(cid:87)(cid:79)(cid:92) (cid:71)(cid:72)(cid:16) (cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:72) (cid:87)(cid:75)(cid:72) (cid:83)(cid:82)(cid:86)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) 
(cid:68)(cid:81)(cid:71) (cid:83)(cid:72)(cid:85)(cid:86)(cid:82)(cid:81) (cid:81)(cid:88)(cid:80)(cid:69)(cid:72)(cid:85) (cid:82)(cid:73) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:54)(cid:48)(cid:54) (cid:80)(cid:72)(cid:86)(cid:86)(cid:68)(cid:74)(cid:72)(cid:86) (cid:88)(cid:86)(cid:76)(cid:81)(cid:74) (cid:75)(cid:68)(cid:81)(cid:71)(cid:16) (cid:70)(cid:85)(cid:68)(cid:73)(cid:87)(cid:72)(cid:71) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72)(cid:86)(cid:72) (cid:87)(cid:85)(cid:68)(cid:71)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72)(cid:16)(cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:80)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71)(cid:86) (cid:85)(cid:72)(cid:84)(cid:88)(cid:76)(cid:85)(cid:72) (cid:75)(cid:72)(cid:68)(cid:89)(cid:92) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72) (cid:72)(cid:81)(cid:74)(cid:76)(cid:81)(cid:72)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74)(cid:17) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17) (cid:11)(cid:21)(cid:19)(cid:20)(cid:25)(cid:12) (cid:73)(cid:82)(cid:85) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:87)(cid:76)(cid:80)(cid:72) (cid:88)(cid:87)(cid:76)(cid:79)(cid:76)(cid:93)(cid:72)(cid:71) (cid:68) (cid:80)(cid:88)(cid:79)(cid:87)(cid:76)(cid:16)(cid:79)(cid:68)(cid:92)(cid:72)(cid:85) (cid:83)(cid:72)(cid:85)(cid:70)(cid:72)(cid:83)(cid:87)(cid:85)(cid:82)(cid:81) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:87)(cid:82) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86)(cid:17) (cid:40)(cid:68)(cid:70)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) (cid:76)(cid:86) (cid:72)(cid:91)(cid:83)(cid:85)(cid:72)(cid:86)(cid:86)(cid:72)(cid:71) (cid:68)(cid:86) (cid:68) (cid:70)(cid:82)(cid:81)(cid:70)(cid:68)(cid:87)(cid:72)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:72)(cid:80)(cid:16) (cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:73) (cid:90)(cid:82)(cid:85)(cid:71) (cid:87)(cid:82)(cid:78)(cid:72)(cid:81)(cid:86) (cid:76)(cid:81) (cid:68) (cid:73)(cid:76)(cid:91)(cid:72)(cid:71)(cid:16)(cid:79)(cid:72)(cid:81)(cid:74)(cid:87)(cid:75) (cid:90)(cid:76)(cid:81)(cid:16) (cid:71)(cid:82)(cid:90)(cid:17) (cid:55)(cid:75)(cid:76)(cid:86) (cid:80)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71) (cid:70)(cid:68)(cid:81) (cid:81)(cid:82)(cid:87) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:89)(cid:72) (cid:87)(cid:75)(cid:72) (cid:70)(cid:68)(cid:86)(cid:72)(cid:86) (cid:90)(cid:75)(cid:72)(cid:81)",
"(cid:41)(cid:76)(cid:74)(cid:88)(cid:85)(cid:72) (cid:21)(cid:29) (cid:49)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:39)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:51)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:53)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:41)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78)(cid:17)",
"(cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:86) (cid:68)(cid:85)(cid:72) (cid:82)(cid:88)(cid:87)(cid:86)(cid:76)(cid:71)(cid:72) (cid:87)(cid:75)(cid:72) (cid:79)(cid:82)(cid:70)(cid:68)(cid:79) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:76)(cid:86) (cid:68) (cid:70)(cid:82)(cid:80)(cid:80)(cid:82)(cid:81) (cid:82)(cid:70)(cid:70)(cid:88)(cid:85)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:76)(cid:81) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:71)(cid:68)(cid:87)(cid:68)(cid:17) (cid:55)(cid:75)(cid:72)(cid:85)(cid:72) (cid:75)(cid:68)(cid:86) (cid:68)(cid:79)(cid:86)(cid:82) (cid:69)(cid:72)(cid:72)(cid:81) (cid:86)(cid:82)(cid:80)(cid:72) (cid:90)(cid:82)(cid:85)(cid:78) (cid:87)(cid:75)(cid:68)(cid:87) (cid:68)(cid:87)(cid:87)(cid:72)(cid:80)(cid:83)(cid:87)(cid:86) (cid:87)(cid:82) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:81) (cid:83)(cid:85)(cid:82)(cid:16)(cid:71)(cid:85)(cid:82)(cid:83) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72)(cid:86) (cid:87)(cid:82) (cid:75)(cid:72)(cid:79)(cid:83) (cid:48)(cid:68)(cid:70)(cid:75)(cid:76)(cid:81)(cid:72) (cid:55)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:11)(cid:58)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:25)(cid:68)(cid:15)(cid:69)(cid:15) (cid:21)(cid:19)(cid:20)(cid:27)(cid:12)(cid:17) (cid:55)(cid:75)(cid:72)(cid:85)(cid:72) (cid:68)(cid:85)(cid:72) (cid:86)(cid:82)(cid:80)(cid:72) (cid:76)(cid:81)(cid:75)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86) (cid:69)(cid:72)(cid:87)(cid:90)(cid:72)(cid:72)(cid:81) (cid:87)(cid:75)(cid:72)(cid:76)(cid:85) (cid:90)(cid:82)(cid:85)(cid:78) (cid:68)(cid:81)(cid:71) (cid:82)(cid:88)(cid:85)(cid:86)(cid:17) (cid:44)(cid:81) (cid:87)(cid:75)(cid:72)(cid:76)(cid:85) (cid:90)(cid:82)(cid:85)(cid:78)(cid:15) (cid:87)(cid:75)(cid:72)(cid:92) (cid:68)(cid:87)(cid:87)(cid:72)(cid:80)(cid:83)(cid:87) (cid:87)(cid:82) (cid:70)(cid:85)(cid:72)(cid:16) (cid:68)(cid:87)(cid:72) (cid:87)(cid:85)(cid:68)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:71)(cid:68)(cid:87)(cid:68) (cid:69)(cid:92) (cid:83)(cid:85)(cid:82)(cid:77)(cid:72)(cid:70)(cid:87)(cid:76)(cid:81)(cid:74) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:73)(cid:85)(cid:82)(cid:80) (cid:68)(cid:81)(cid:16) (cid:82)(cid:87)(cid:75)(cid:72)(cid:85) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:87)(cid:82) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72)(cid:15) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72) (cid:86)(cid:83)(cid:72)(cid:70)(cid:76)(cid:73)(cid:76)(cid:70) (cid:79)(cid:82)(cid:70)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:68) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:80)(cid:68)(cid:92) (cid:81)(cid:82)(cid:87) (cid:80)(cid:68)(cid:87)(cid:87)(cid:72)(cid:85) (cid:68)(cid:86) (cid:80)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:79)(cid:82)(cid:81)(cid:74) (cid:68)(cid:86) (cid:76)(cid:87) (cid:76)(cid:86) 
(cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:72)(cid:71) (cid:70)(cid:82)(cid:85)(cid:85)(cid:72)(cid:70)(cid:87)(cid:79)(cid:92) (cid:90)(cid:75)(cid:72)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:76)(cid:86) (cid:87)(cid:75)(cid:72) (cid:86)(cid:82)(cid:88)(cid:85)(cid:70)(cid:72) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72)(cid:17) (cid:44)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:68)(cid:86)(cid:87)(cid:15) (cid:90)(cid:72) (cid:73)(cid:82)(cid:70)(cid:88)(cid:86) (cid:82)(cid:81) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:39)(cid:51)(cid:86) (cid:73)(cid:85)(cid:82)(cid:80) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:15) (cid:68)(cid:81)(cid:71) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:70)(cid:82)(cid:85)(cid:85)(cid:72)(cid:70)(cid:87) (cid:83)(cid:82)(cid:86)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:68) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:16) (cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:70)(cid:85)(cid:76)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79) (cid:83)(cid:68)(cid:85)(cid:87) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:87)(cid:68)(cid:86)(cid:78)(cid:17)",
"(cid:36) (cid:79)(cid:76)(cid:81)(cid:72) (cid:82)(cid:73) (cid:85)(cid:72)(cid:86)(cid:72)(cid:68)(cid:85)(cid:70)(cid:75) (cid:87)(cid:75)(cid:68)(cid:87) (cid:76)(cid:86) (cid:70)(cid:79)(cid:82)(cid:86)(cid:72)(cid:79)(cid:92) (cid:85)(cid:72)(cid:79)(cid:68)(cid:87)(cid:72)(cid:71) (cid:87)(cid:82) (cid:82)(cid:88)(cid:85) (cid:87)(cid:68)(cid:86)(cid:78) (cid:76)(cid:86) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:11)(cid:61)(cid:51)(cid:12) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:68)(cid:76)(cid:80)(cid:86) (cid:87)(cid:82) (cid:85)(cid:72)(cid:16) (cid:86)(cid:82)(cid:79)(cid:89)(cid:72) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:87)(cid:82) (cid:87)(cid:75)(cid:72)(cid:76)(cid:85) (cid:68)(cid:81)(cid:87)(cid:72)(cid:70)(cid:72)(cid:71)(cid:72)(cid:81)(cid:87)(cid:86)(cid:17) (cid:38)(cid:82)(cid:81)(cid:16) (cid:89)(cid:72)(cid:85)(cid:86)(cid:72) (cid:68)(cid:81)(cid:71) (cid:51)(cid:68)(cid:79)(cid:80)(cid:72)(cid:85) (cid:11)(cid:21)(cid:19)(cid:19)(cid:25)(cid:12) (cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:71) (cid:68) (cid:85)(cid:88)(cid:79)(cid:72)(cid:16)(cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:80)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71) (cid:87)(cid:82) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:89)(cid:72) (cid:61)(cid:51) (cid:69)(cid:92) (cid:88)(cid:87)(cid:76)(cid:79)(cid:76)(cid:93)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:43)(cid:82)(cid:69)(cid:69)(cid:86) (cid:68)(cid:79)(cid:16) (cid:74)(cid:82)(cid:85)(cid:76)(cid:87)(cid:75)(cid:80) (cid:11)(cid:43)(cid:82)(cid:69)(cid:69)(cid:86)(cid:15) (cid:20)(cid:28)(cid:26)(cid:27)(cid:12)(cid:17) (cid:47)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74)(cid:16)(cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:68)(cid:81)(cid:68)(cid:83)(cid:75)(cid:82)(cid:85)(cid:76)(cid:70) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:83)(cid:83)(cid:85)(cid:82)(cid:68)(cid:70)(cid:75)(cid:72)(cid:86) (cid:75)(cid:68)(cid:89)(cid:72) (cid:68)(cid:79)(cid:86)(cid:82) (cid:69)(cid:72)(cid:72)(cid:81) (cid:72)(cid:91)(cid:87)(cid:72)(cid:81)(cid:86)(cid:76)(cid:89)(cid:72)(cid:79)(cid:92) (cid:72)(cid:91)(cid:83)(cid:79)(cid:82)(cid:85)(cid:72)(cid:71)(cid:17) (cid:61)(cid:75)(cid:68)(cid:82) (cid:68)(cid:81)(cid:71) (cid:49)(cid:74) (cid:11)(cid:21)(cid:19)(cid:19)(cid:26)(cid:12) (cid:68)(cid:81)(cid:71) (cid:46)(cid:82)(cid:81)(cid:74) (cid:68)(cid:81)(cid:71) (cid:61)(cid:75)(cid:82)(cid:88) (cid:11)(cid:21)(cid:19)(cid:20)(cid:19)(cid:12) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:86)(cid:92)(cid:86)(cid:87)(cid:72)(cid:80)(cid:86) (cid:87)(cid:75)(cid:68)(cid:87) (cid:83)(cid:72)(cid:85)(cid:73)(cid:82)(cid:85)(cid:80) (cid:61)(cid:51) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:69)(cid:92) (cid:76)(cid:81)(cid:87)(cid:72)(cid:74)(cid:85)(cid:68)(cid:87)(cid:76)(cid:81)(cid:74) (cid:86)(cid:92)(cid:81)(cid:87)(cid:68)(cid:70)(cid:87)(cid:76)(cid:70) (cid:73)(cid:72)(cid:68)(cid:87)(cid:88)(cid:85)(cid:72)(cid:86) (cid:68)(cid:81)(cid:71) (cid:83)(cid:82)(cid:16) (cid:86)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:81) (cid:68) 
(cid:86)(cid:92)(cid:86)(cid:87)(cid:72)(cid:80) (cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:82)(cid:81) (cid:71)(cid:72)(cid:70)(cid:76)(cid:86)(cid:76)(cid:82)(cid:81) (cid:87)(cid:85)(cid:72)(cid:72)(cid:86) (cid:68)(cid:81)(cid:71) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:16)(cid:86)(cid:72)(cid:81)(cid:86)(cid:76)(cid:87)(cid:76)(cid:89)(cid:72) (cid:70)(cid:82)(cid:81)(cid:89)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:85)(cid:72)(cid:72) (cid:78)(cid:72)(cid:85)(cid:16) (cid:81)(cid:72)(cid:79)(cid:86)(cid:17) (cid:58)(cid:76)(cid:87)(cid:75) (cid:87)(cid:75)(cid:72) (cid:83)(cid:82)(cid:90)(cid:72)(cid:85)(cid:73)(cid:88)(cid:79) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:70)(cid:68)(cid:83)(cid:68)(cid:70)(cid:76)(cid:87)(cid:92) (cid:82)(cid:73) (cid:81)(cid:72)(cid:88)(cid:16) (cid:85)(cid:68)(cid:79) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:86)(cid:15) (cid:85)(cid:72)(cid:70)(cid:72)(cid:81)(cid:87) (cid:90)(cid:82)(cid:85)(cid:78) (cid:73)(cid:82)(cid:70)(cid:88)(cid:86)(cid:72)(cid:86) (cid:82)(cid:81) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:85)(cid:72)(cid:83)(cid:16) (cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:87)(cid:82) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:89)(cid:72) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:82)(cid:85) (cid:70)(cid:82)(cid:80)(cid:80)(cid:82)(cid:81)",
"(cid:81)(cid:82)(cid:88)(cid:81) (cid:83)(cid:75)(cid:85)(cid:68)(cid:86)(cid:72)(cid:86) (cid:11)(cid:49)(cid:74)(cid:15) (cid:21)(cid:19)(cid:19)(cid:26)(cid:30) (cid:60)(cid:76)(cid:81) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:26)(cid:15) (cid:21)(cid:19)(cid:20)(cid:27)(cid:12)(cid:17) (cid:55)(cid:75)(cid:72) (cid:39)(cid:51) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:87)(cid:68)(cid:86)(cid:78) (cid:72)(cid:91)(cid:83)(cid:79)(cid:82)(cid:85)(cid:72)(cid:71) (cid:76)(cid:81) (cid:82)(cid:88)(cid:85) (cid:90)(cid:82)(cid:85)(cid:78) (cid:73)(cid:82)(cid:70)(cid:88)(cid:86)(cid:72)(cid:86) (cid:82)(cid:81) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:83)(cid:82)(cid:86)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) (cid:87)(cid:92)(cid:83)(cid:72) (cid:82)(cid:73) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:90)(cid:76)(cid:87)(cid:75)(cid:82)(cid:88)(cid:87) (cid:68)(cid:87)(cid:87)(cid:72)(cid:80)(cid:83)(cid:87)(cid:76)(cid:81)(cid:74) (cid:87)(cid:82) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:89)(cid:72) (cid:76)(cid:87) (cid:87)(cid:82) (cid:68)(cid:81) (cid:68)(cid:81)(cid:16) (cid:87)(cid:72)(cid:70)(cid:72)(cid:71)(cid:72)(cid:81)(cid:87)(cid:17)",
"(cid:55)(cid:75)(cid:72) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:80)(cid:72)(cid:70)(cid:75)(cid:68)(cid:81)(cid:76)(cid:86)(cid:80) (cid:76)(cid:86) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:73)(cid:82)(cid:85) (cid:81)(cid:72)(cid:88)(cid:16) (cid:85)(cid:68)(cid:79) (cid:80)(cid:68)(cid:70)(cid:75)(cid:76)(cid:81)(cid:72) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:11)(cid:37)(cid:68)(cid:75)(cid:71)(cid:68)(cid:81)(cid:68)(cid:88) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:23)(cid:12) (cid:68)(cid:81)(cid:71) (cid:75)(cid:68)(cid:86) (cid:69)(cid:72)(cid:72)(cid:81) (cid:68)(cid:87)(cid:87)(cid:72)(cid:80)(cid:83)(cid:87)(cid:72)(cid:71) (cid:76)(cid:81) (cid:68) (cid:89)(cid:68)(cid:85)(cid:76)(cid:72)(cid:87)(cid:92) (cid:82)(cid:73) (cid:81)(cid:68)(cid:87)(cid:88)(cid:85)(cid:68)(cid:79) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:83)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86)(cid:76)(cid:81)(cid:74) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86) (cid:11)(cid:37)(cid:68)(cid:75)(cid:71)(cid:68)(cid:81)(cid:68)(cid:88) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:23)(cid:30) (cid:43)(cid:72)(cid:85)(cid:80)(cid:68)(cid:81)(cid:81) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:12)(cid:17) (cid:46)(cid:76)(cid:80) (cid:11)(cid:21)(cid:19)(cid:20)(cid:26)(cid:12) (cid:72)(cid:91)(cid:87)(cid:72)(cid:81)(cid:71)(cid:72)(cid:71) (cid:87)(cid:75)(cid:72) (cid:69)(cid:68)(cid:86)(cid:76)(cid:70) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81) (cid:80)(cid:72)(cid:70)(cid:75)(cid:68)(cid:81)(cid:76)(cid:86)(cid:80) (cid:87)(cid:82) (cid:76)(cid:81)(cid:70)(cid:82)(cid:85)(cid:83)(cid:82)(cid:85)(cid:68)(cid:87)(cid:72) (cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:87)(cid:88)(cid:85)(cid:68)(cid:79) (cid:69)(cid:76)(cid:68)(cid:86)(cid:72)(cid:86) (cid:69)(cid:92) (cid:70)(cid:82)(cid:80)(cid:69)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:74)(cid:85)(cid:68)(cid:83)(cid:75)(cid:76)(cid:70)(cid:68)(cid:79) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:86) (cid:90)(cid:76)(cid:87)(cid:75) (cid:81)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:86)(cid:17) (cid:60)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17) (cid:11)(cid:21)(cid:19)(cid:20)(cid:25)(cid:12) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:68) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16) (cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:75)(cid:76)(cid:72)(cid:85)(cid:68)(cid:85)(cid:70)(cid:75)(cid:76)(cid:70)(cid:68)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78) (cid:73)(cid:82)(cid:85) (cid:71)(cid:82)(cid:70)(cid:88)(cid:80)(cid:72)(cid:81)(cid:87) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86)(cid:76)(cid:73)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:59)(cid:76)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17) (cid:11)(cid:21)(cid:19)(cid:20)(cid:27)(cid:12) (cid:68)(cid:79)(cid:86)(cid:82) (cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:71) (cid:68) 
(cid:75)(cid:76)(cid:72)(cid:85)(cid:68)(cid:85)(cid:70)(cid:75)(cid:76)(cid:70)(cid:68)(cid:79) (cid:85)(cid:72)(cid:70)(cid:88)(cid:85)(cid:85)(cid:72)(cid:81)(cid:87) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78) (cid:11)(cid:43)(cid:53)(cid:36)(cid:49)(cid:12) (cid:87)(cid:82) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:87)(cid:75)(cid:72) (cid:75)(cid:76)(cid:72)(cid:85)(cid:68)(cid:85)(cid:70)(cid:75)(cid:92) (cid:82)(cid:73) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) (cid:76)(cid:81) (cid:82)(cid:85)(cid:71)(cid:72)(cid:85) (cid:87)(cid:82) (cid:74)(cid:72)(cid:81)(cid:72)(cid:85)(cid:68)(cid:87)(cid:72) (cid:80)(cid:88)(cid:79)(cid:87)(cid:76)(cid:16)(cid:87)(cid:88)(cid:85)(cid:81) (cid:85)(cid:72)(cid:86)(cid:83)(cid:82)(cid:81)(cid:86)(cid:72)(cid:86) (cid:76)(cid:81) (cid:70)(cid:75)(cid:68)(cid:87)(cid:16) (cid:69)(cid:82)(cid:87)(cid:86)(cid:17) (cid:44)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:68)(cid:86)(cid:87) (cid:90)(cid:76)(cid:87)(cid:75) (cid:87)(cid:75)(cid:72)(cid:86)(cid:72) (cid:69)(cid:82)(cid:87)(cid:87)(cid:82)(cid:80)(cid:16)(cid:88)(cid:83) (cid:80)(cid:72)(cid:70)(cid:75)(cid:68)(cid:16) (cid:81)(cid:76)(cid:86)(cid:80)(cid:86)(cid:15) (cid:82)(cid:88)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:68)(cid:71)(cid:82)(cid:83)(cid:87)(cid:86) (cid:68) (cid:87)(cid:82)(cid:83)(cid:16)(cid:71)(cid:82)(cid:90)(cid:81) (cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:87)(cid:88)(cid:85)(cid:72)(cid:71) (cid:68)(cid:87)(cid:16) (cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:80)(cid:72)(cid:70)(cid:75)(cid:68)(cid:81)(cid:76)(cid:86)(cid:80) (cid:87)(cid:82) (cid:70)(cid:82)(cid:81)(cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:87) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:68) (cid:39)(cid:51)(cid:17) (cid:50)(cid:88)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:80)(cid:83)(cid:87)(cid:86) (cid:87)(cid:82) (cid:76)(cid:71)(cid:72)(cid:81)(cid:87)(cid:76)(cid:73)(cid:92) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:70)(cid:82)(cid:81)(cid:87)(cid:68)(cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:82)(cid:73) (cid:68) (cid:39)(cid:51)(cid:15) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72)(cid:81) (cid:73)(cid:82)(cid:70)(cid:88)(cid:86) (cid:76)(cid:81) (cid:82)(cid:81) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:76)(cid:87)(cid:86)(cid:72)(cid:79)(cid:73)(cid:17) (cid:50)(cid:88)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:71)(cid:85)(cid:68)(cid:90)(cid:86) (cid:76)(cid:81)(cid:86)(cid:83)(cid:76)(cid:85)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:85)(cid:82)(cid:80) (cid:87)(cid:75)(cid:72) (cid:80)(cid:72)(cid:80)(cid:82)(cid:85)(cid:92) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78) (cid:68)(cid:81)(cid:71) (cid:76)(cid:87)(cid:86) (cid:89)(cid:68)(cid:85)(cid:76)(cid:68)(cid:81)(cid:87)(cid:86) 
(cid:11)(cid:58)(cid:72)(cid:86)(cid:87)(cid:82)(cid:81) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:30) (cid:54)(cid:88)(cid:78)(cid:75)(cid:69)(cid:68)(cid:68)(cid:87)(cid:68)(cid:85) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:30) (cid:43)(cid:72)(cid:81)(cid:68)(cid:73)(cid:73) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:26)(cid:30) (cid:48)(cid:76)(cid:79)(cid:79)(cid:72)(cid:85) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:25)(cid:12)(cid:15) (cid:90)(cid:75)(cid:72)(cid:85)(cid:72) (cid:68)(cid:81) (cid:72)(cid:91)(cid:87)(cid:72)(cid:85)(cid:81)(cid:68)(cid:79) (cid:80)(cid:72)(cid:80)(cid:82)(cid:85)(cid:92) (cid:70)(cid:82)(cid:80)(cid:83)(cid:82)(cid:81)(cid:72)(cid:81)(cid:87) (cid:76)(cid:86) (cid:88)(cid:86)(cid:72)(cid:71) (cid:87)(cid:82) (cid:86)(cid:87)(cid:82)(cid:85)(cid:72) (cid:68)(cid:81)(cid:71) (cid:88)(cid:83)(cid:71)(cid:68)(cid:87)(cid:72) (cid:78)(cid:81)(cid:82)(cid:90)(cid:79)(cid:72)(cid:71)(cid:74)(cid:72)(cid:17)",
"(cid:41)(cid:82)(cid:79)(cid:79)(cid:82)(cid:90)(cid:76)(cid:81)(cid:74) (cid:11)(cid:60)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:12)(cid:15) (cid:90)(cid:72) (cid:73)(cid:82)(cid:85)(cid:80)(cid:88)(cid:79)(cid:68)(cid:87)(cid:72) (cid:39)(cid:51) (cid:85)(cid:72)(cid:16) (cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:68)(cid:86) (cid:68) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:87)(cid:76)(cid:68)(cid:79) (cid:87)(cid:68)(cid:74)(cid:74)(cid:76)(cid:81)(cid:74) (cid:83)(cid:85)(cid:82)(cid:69)(cid:79)(cid:72)(cid:80)(cid:17) (cid:42)(cid:76)(cid:89)(cid:72)(cid:81) (cid:68)(cid:81) (cid:76)(cid:81)(cid:83)(cid:88)(cid:87) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) X = ( x 1 , x 2 , ..., x s ) (cid:68)(cid:81)(cid:71) (cid:76)(cid:87)(cid:86) (cid:70)(cid:82)(cid:81)(cid:16) (cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) C = ( X 1 , ..., X m ) (cid:15) (cid:82)(cid:88)(cid:85) (cid:87)(cid:68)(cid:86)(cid:78) (cid:76)(cid:86) (cid:87)(cid:82) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) P ( Y | X, C ) (cid:68)(cid:81)(cid:71) (cid:83)(cid:85)(cid:72)(cid:71)(cid:76)(cid:70)(cid:87) (cid:68) (cid:86)(cid:72)(cid:87) (cid:82)(cid:73) (cid:79)(cid:68)(cid:69)(cid:72)(cid:79)(cid:86) Y = ( y 1 , y 2 , ..., y s ) (cid:17) (cid:40)(cid:68)(cid:70)(cid:75) (cid:72)(cid:79)(cid:72)(cid:80)(cid:72)(cid:81)(cid:87) (cid:82)(cid:73) Y (cid:76)(cid:81)(cid:71)(cid:76)(cid:70)(cid:68)(cid:87)(cid:72)(cid:86) (cid:90)(cid:75)(cid:72)(cid:87)(cid:75)(cid:72)(cid:85) (cid:68) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:68)(cid:81)(cid:71) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:87)(cid:92)(cid:83)(cid:72) (cid:76)(cid:87) (cid:86)(cid:75)(cid:82)(cid:88)(cid:79)(cid:71) (cid:69)(cid:72) (cid:69)(cid:72)(cid:73)(cid:82)(cid:85)(cid:72) (cid:72)(cid:68)(cid:70)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) X (cid:17) (cid:55)(cid:75)(cid:72) (cid:49)(cid:39)(cid:51)(cid:53) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:70)(cid:82)(cid:81)(cid:86)(cid:76)(cid:86)(cid:87)(cid:86) (cid:82)(cid:73) (cid:87)(cid:75)(cid:85)(cid:72)(cid:72) (cid:70)(cid:82)(cid:80)(cid:83)(cid:82)(cid:81)(cid:72)(cid:81)(cid:87)(cid:86)(cid:29) (cid:11)(cid:20)(cid:12) (cid:44)(cid:81)(cid:16) (cid:83)(cid:88)(cid:87) (cid:47)(cid:68)(cid:92)(cid:72)(cid:85)(cid:30) (cid:11)(cid:21)(cid:12) (cid:53)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:48)(cid:82)(cid:71)(cid:72)(cid:79)(cid:76)(cid:81)(cid:74) (cid:47)(cid:68)(cid:92)(cid:72)(cid:85)(cid:30) (cid:11)(cid:22)(cid:12) (cid:50)(cid:88)(cid:87)(cid:16) (cid:83)(cid:88)(cid:87) (cid:47)(cid:68)(cid:92)(cid:72)(cid:85)(cid:17) (cid:58)(cid:72) (cid:71)(cid:72)(cid:86)(cid:70)(cid:85)(cid:76)(cid:69)(cid:72) (cid:87)(cid:75)(cid:72)(cid:80) (cid:76)(cid:81) (cid:71)(cid:72)(cid:87)(cid:68)(cid:76)(cid:79) (cid:69)(cid:72)(cid:79)(cid:82)(cid:90)(cid:17)",
"(cid:39)(cid:88)(cid:85)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:72)(cid:81)(cid:70)(cid:82)(cid:71)(cid:76)(cid:81)(cid:74) (cid:83)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86)(cid:15) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:83)(cid:88)(cid:87) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:73) (cid:79)(cid:72)(cid:81)(cid:74)(cid:87)(cid:75) s (cid:15) (cid:76)(cid:86) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:73)(cid:82)(cid:85)(cid:80)(cid:72)(cid:71) (cid:76)(cid:81)(cid:87)(cid:82) (cid:68) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:73) d (cid:16) (cid:71)(cid:76)(cid:80)(cid:72)(cid:81)(cid:86)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:68)(cid:87) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87)(cid:15) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72)(cid:81) (cid:73)(cid:72)(cid:71) (cid:76)(cid:81)(cid:87)(cid:82) (cid:87)(cid:90)(cid:82) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:78)(cid:76)(cid:81)(cid:71)(cid:86) (cid:82)(cid:73) (cid:69)(cid:76)(cid:71)(cid:76)(cid:85)(cid:72)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:85)(cid:72)(cid:70)(cid:88)(cid:85)(cid:85)(cid:72)(cid:81)(cid:87) (cid:81)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:86) (cid:11)(cid:53)(cid:49)(cid:49)(cid:12) (cid:11)(cid:40)(cid:79)(cid:80)(cid:68)(cid:81)(cid:15) (cid:20)(cid:28)(cid:28)(cid:20)(cid:12)(cid:29)",
"(cid:135) (cid:37)(cid:76)(cid:42)(cid:53)(cid:56) (cid:11)(cid:37)(cid:68)(cid:75)(cid:71)(cid:68)(cid:81)(cid:68)(cid:88) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:23)(cid:12)(cid:29) (cid:40)(cid:68)(cid:70)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) x n , n { 1 , ..., s } (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:76)(cid:86) (cid:85)(cid:72)(cid:83)(cid:16) (cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:71) (cid:68)(cid:86) (cid:68) (cid:70)(cid:82)(cid:81)(cid:70)(cid:68)(cid:87)(cid:72)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:73)(cid:82)(cid:85)(cid:90)(cid:68)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:69)(cid:68)(cid:70)(cid:78)(cid:90)(cid:68)(cid:85)(cid:71) (cid:75)(cid:76)(cid:71)(cid:71)(cid:72)(cid:81) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72)(cid:86) (cid:68)(cid:86)(cid:29) h n = [ h n , h n ] (cid:17) (cid:58)(cid:72) (cid:68)(cid:76)(cid:80) (cid:87)(cid:82) (cid:72)(cid:91)(cid:83)(cid:85)(cid:72)(cid:86)(cid:86) (cid:39)(cid:51) (cid:69)(cid:92) (cid:87)(cid:75)(cid:72) (cid:75)(cid:76)(cid:71)(cid:71)(cid:72)(cid:81) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:73)(cid:87)(cid:72)(cid:85) (cid:39)(cid:51)(cid:17) (cid:135) (cid:51)(cid:38)(cid:16)(cid:37)(cid:76)(cid:42)(cid:53)(cid:56) (cid:11)(cid:60)(cid:76)(cid:81) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:26)(cid:12)(cid:29) (cid:51)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:16) (cid:70)(cid:72)(cid:81)(cid:87)(cid:72)(cid:85)(cid:72)(cid:71) (cid:37)(cid:76)(cid:42)(cid:53)(cid:56) (cid:11)(cid:51)(cid:38)(cid:16)(cid:37)(cid:76)(cid:42)(cid:53)(cid:56)(cid:12) (cid:68)(cid:79)(cid:86)(cid:82) (cid:70)(cid:82)(cid:81)(cid:87)(cid:68)(cid:76)(cid:81)(cid:86) (cid:87)(cid:90)(cid:82) (cid:76)(cid:81)(cid:71)(cid:72)(cid:83)(cid:72)(cid:81)(cid:71)(cid:72)(cid:81)(cid:87) (cid:42)(cid:53)(cid:56) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:86)(cid:17) (cid:41)(cid:82)(cid:85) (cid:72)(cid:68)(cid:70)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) x n , n { 1 , ..., s } (cid:15) (cid:87)(cid:75)(cid:72) (cid:73)(cid:82)(cid:85)(cid:90)(cid:68)(cid:85)(cid:71) (cid:42)(cid:53)(cid:56) f (cid:72)(cid:81)(cid:70)(cid:82)(cid:71)(cid:72)(cid:86) (cid:87)(cid:75)(cid:72) (cid:83)(cid:85)(cid:72)(cid:70)(cid:72)(cid:71)(cid:76)(cid:81)(cid:74) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) (cid:82)(cid:73) (cid:39)(cid:51) (cid:73)(cid:85)(cid:82)(cid:80) (cid:79)(cid:72)(cid:73)(cid:87) (cid:87)(cid:82) (cid:85)(cid:76)(cid:74)(cid:75)(cid:87) (cid:68)(cid:86) h n 1 (cid:15) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72) (cid:69)(cid:68)(cid:70)(cid:78)(cid:90)(cid:68)(cid:85)(cid:71) (cid:42)(cid:53)(cid:56) b (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:86) (cid:87)(cid:75)(cid:72) (cid:86)(cid:88)(cid:70)(cid:70)(cid:72)(cid:72)(cid:71)(cid:76)(cid:81)(cid:74) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) (cid:68)(cid:86) h n (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:89)(cid:72)(cid:85)(cid:86)(cid:72) (cid:71)(cid:76)(cid:85)(cid:72)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:41)(cid:76)(cid:81)(cid:68)(cid:79) (cid:75)(cid:76)(cid:71)(cid:71)(cid:72)(cid:81) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72) (cid:82)(cid:73) (cid:39)(cid:51) 
(cid:76)(cid:86) (cid:68)(cid:79)(cid:86)(cid:82) (cid:70)(cid:82)(cid:81)(cid:70)(cid:68)(cid:87)(cid:72)(cid:81)(cid:68)(cid:87)(cid:72)(cid:71) (cid:68)(cid:86) h n = [ h n , h n 1 ] (cid:17) (cid:55)(cid:75)(cid:72) (cid:76)(cid:81)(cid:87)(cid:88)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:76)(cid:86) (cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:87)(cid:88)(cid:85)(cid:72) (cid:76)(cid:86) (cid:87)(cid:82) (cid:72)(cid:91)(cid:83)(cid:85)(cid:72)(cid:86)(cid:86) (cid:39)(cid:51) (cid:69)(cid:92) (cid:87)(cid:75)(cid:72) (cid:79)(cid:68)(cid:86)(cid:87) (cid:90)(cid:82)(cid:85)(cid:71) (cid:69)(cid:72)(cid:73)(cid:82)(cid:85)(cid:72) (cid:39)(cid:51) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:73)(cid:87)(cid:72)(cid:85) (cid:39)(cid:51)(cid:17)",
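"A minimal PyTorch sketch of the two encoders (module names and dimensions are our assumptions, not the released code):",
```python
import torch
import torch.nn as nn

class PCBiGRU(nn.Module):
    """Pronoun-centered BiGRU sketch: position n is represented by the
    forward state at n-1 and the backward state at n, i.e. by the last
    word before the DP and the first word after it."""
    def __init__(self, emb_dim=100, hid_dim=150):   # assumed sizes
        super().__init__()
        self.fwd = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.bwd = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, emb):                      # emb: (batch, s, emb_dim)
        h_fwd, _ = self.fwd(emb)                 # left-to-right states
        h_bwd, _ = self.bwd(torch.flip(emb, [1]))
        h_bwd = torch.flip(h_bwd, [1])           # re-align right-to-left states
        pad = torch.zeros_like(h_fwd[:, :1])     # shift: position n sees state n-1
        h_fwd = torch.cat([pad, h_fwd[:, :-1]], dim=1)
        return torch.cat([h_fwd, h_bwd], dim=-1)

# The plain BiGRU variant is simply:
# nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
```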
"(cid:55)(cid:75)(cid:72) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) (cid:83)(cid:85)(cid:82)(cid:89)(cid:76)(cid:71)(cid:72)(cid:86) (cid:87)(cid:75)(cid:72) (cid:81)(cid:72)(cid:70)(cid:72)(cid:86)(cid:86)(cid:68)(cid:85)(cid:92) (cid:69)(cid:68)(cid:70)(cid:78)(cid:74)(cid:85)(cid:82)(cid:88)(cid:81)(cid:71) (cid:76)(cid:81)(cid:16) (cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:82) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85) (cid:39)(cid:51)(cid:86)(cid:17) (cid:44)(cid:81) (cid:82)(cid:88)(cid:85) (cid:90)(cid:82)(cid:85)(cid:78)(cid:15) (cid:90)(cid:72) (cid:88)(cid:87)(cid:76)(cid:79)(cid:76)(cid:93)(cid:72) (cid:73)(cid:76)(cid:89)(cid:72) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) (cid:83)(cid:85)(cid:72)(cid:70)(cid:72)(cid:71)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:82)(cid:81)(cid:72) (cid:87)(cid:75)(cid:72) (cid:39)(cid:51) (cid:76)(cid:86) (cid:76)(cid:81) (cid:68)(cid:81)(cid:71) (cid:87)(cid:90)(cid:82) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) (cid:73)(cid:82)(cid:79)(cid:79)(cid:82)(cid:90)(cid:76)(cid:81)(cid:74) (cid:70)(cid:88)(cid:85)(cid:85)(cid:72)(cid:81)(cid:87) (cid:76)(cid:81)(cid:83)(cid:88)(cid:87) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) X (cid:68)(cid:86) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) (cid:38) (cid:17) (cid:55)(cid:75)(cid:72) (cid:86)(cid:76)(cid:93)(cid:72) (cid:82)(cid:73) (cid:38) (cid:76)(cid:86) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:72)(cid:71) (cid:72)(cid:80)(cid:16) (cid:83)(cid:76)(cid:85)(cid:76)(cid:70)(cid:68)(cid:79)(cid:79)(cid:92) (cid:86)(cid:76)(cid:81)(cid:70)(cid:72) (cid:82)(cid:88)(cid:85) (cid:86)(cid:87)(cid:68)(cid:87)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70)(cid:86) (cid:86)(cid:75)(cid:82)(cid:90) (cid:87)(cid:75)(cid:68)(cid:87) 97 (cid:8) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:70)(cid:68)(cid:81) (cid:69)(cid:72) (cid:76)(cid:81)(cid:73)(cid:72)(cid:85)(cid:85)(cid:72)(cid:71) (cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:82)(cid:81) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:16) (cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:26) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86) (cid:86)(cid:88)(cid:85)(cid:85)(cid:82)(cid:88)(cid:81)(cid:71)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)",
"(cid:76)(cid:81) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:82)(cid:70)(cid:70)(cid:88)(cid:85)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) (cid:38) (cid:83)(cid:68)(cid:86)(cid:86)(cid:72)(cid:86) (cid:87)(cid:75)(cid:85)(cid:82)(cid:88)(cid:74)(cid:75) (cid:87)(cid:75)(cid:72) (cid:86)(cid:68)(cid:80)(cid:72) (cid:72)(cid:81)(cid:70)(cid:82)(cid:71)(cid:72)(cid:85) (cid:68)(cid:86) X (cid:15) (cid:92)(cid:76)(cid:72)(cid:79)(cid:71)(cid:76)(cid:81)(cid:74) (cid:87)(cid:90)(cid:82) (cid:78)(cid:76)(cid:81)(cid:71)(cid:86) (cid:82)(cid:73) (cid:80)(cid:72)(cid:80)(cid:82)(cid:85)(cid:76)(cid:72)(cid:86)(cid:29) (cid:11)(cid:20)(cid:12) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:80)(cid:72)(cid:80)(cid:16) (cid:82)(cid:85)(cid:92)(cid:29) (cid:70)(cid:82)(cid:81)(cid:70)(cid:68)(cid:87)(cid:72)(cid:81)(cid:68)(cid:87)(cid:72)(cid:71) (cid:73)(cid:76)(cid:81)(cid:68)(cid:79) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72)(cid:86) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:73)(cid:82)(cid:85)(cid:90)(cid:68)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:69)(cid:68)(cid:70)(cid:78)(cid:90)(cid:68)(cid:85)(cid:71) (cid:42)(cid:53)(cid:56) (cid:73)(cid:82)(cid:85) (cid:72)(cid:68)(cid:70)(cid:75) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72) i = { 1 , ..., m } (cid:68)(cid:86) cs i = [ cs i , cs i ] (cid:15) (cid:75)(cid:82)(cid:79)(cid:71)(cid:76)(cid:81)(cid:74) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:69)(cid:68)(cid:70)(cid:78)(cid:74)(cid:85)(cid:82)(cid:88)(cid:81)(cid:71) (cid:78)(cid:81)(cid:82)(cid:90)(cid:79)(cid:72)(cid:71)(cid:74)(cid:72)(cid:17) (cid:11)(cid:21)(cid:12) (cid:90)(cid:82)(cid:85)(cid:71)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:80)(cid:72)(cid:80)(cid:16) (cid:82)(cid:85)(cid:92)(cid:29) (cid:68) (cid:86)(cid:72)(cid:87) (cid:82)(cid:73) (cid:70)(cid:82)(cid:81)(cid:70)(cid:68)(cid:87)(cid:72)(cid:81)(cid:68)(cid:87)(cid:72)(cid:71) (cid:75)(cid:76)(cid:71)(cid:71)(cid:72)(cid:81) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72)(cid:86) (cid:68)(cid:87) (cid:72)(cid:68)(cid:70)(cid:75) (cid:87)(cid:76)(cid:80)(cid:72) (cid:86)(cid:87)(cid:72)(cid:83) j = { 1 , ..., k } (cid:68)(cid:86) cw i,j = [ cw i,j , cw i,j ] (cid:15) (cid:72)(cid:91)(cid:83)(cid:85)(cid:72)(cid:86)(cid:86)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86) (cid:76)(cid:81) (cid:80)(cid:72)(cid:80)(cid:82)(cid:85)(cid:92)(cid:17)",
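"Under the same assumptions, both memories can be read off one bidirectional GRU pass over the embedded context:",
```python
import torch
import torch.nn as nn

m, k, emb_dim, hid = 7, 20, 100, 150        # toy sizes: 7 context utterances
ctx_emb = torch.randn(m, k, emb_dim)        # stand-in for the embedded context C
ctx_gru = nn.GRU(emb_dim, hid, batch_first=True, bidirectional=True)

cw, h_last = ctx_gru(ctx_emb)               # word-level memory cw_{i,j}: (m, k, 2*hid)
cs = torch.cat([h_last[0], h_last[1]], -1)  # sentence-level memory cs_i: (m, 2*hid)
```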
"(cid:55)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:76)(cid:81)(cid:74) (cid:79)(cid:68)(cid:92)(cid:72)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:86) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:87)(cid:72)(cid:85)(cid:68)(cid:70)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81) (cid:69)(cid:72)(cid:87)(cid:90)(cid:72)(cid:72)(cid:81) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:83)(cid:88)(cid:87) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) X (cid:68)(cid:81)(cid:71) (cid:76)(cid:87)(cid:86) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87) C (cid:68)(cid:81)(cid:71) (cid:82)(cid:88)(cid:87)(cid:83)(cid:88)(cid:87)(cid:86) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:39)(cid:51) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72) (cid:85)(cid:72)(cid:79)(cid:72)(cid:89)(cid:68)(cid:81)(cid:87) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:83)(cid:85)(cid:72)(cid:71)(cid:76)(cid:70)(cid:87)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:86) (cid:85)(cid:72)(cid:87)(cid:85)(cid:76)(cid:72)(cid:89)(cid:72)(cid:71) (cid:69)(cid:92) (cid:68) (cid:87)(cid:90)(cid:82)(cid:16)(cid:86)(cid:87)(cid:72)(cid:83) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81)(cid:16)(cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:76)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:80)(cid:72)(cid:70)(cid:75)(cid:68)(cid:81)(cid:76)(cid:86)(cid:80) (cid:76)(cid:81) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:87)(cid:72)(cid:85)(cid:80)(cid:72)(cid:71)(cid:76)(cid:68)(cid:87)(cid:72) (cid:85)(cid:72)(cid:16) (cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:82)(cid:73) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:86) (cid:88)(cid:87)(cid:76)(cid:79)(cid:76)(cid:93)(cid:72)(cid:71) (cid:87)(cid:82) (cid:86)(cid:88)(cid:83)(cid:16) (cid:83)(cid:82)(cid:85)(cid:87) (cid:86)(cid:88)(cid:69)(cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:87) (cid:90)(cid:82)(cid:85)(cid:71)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:76)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72)(cid:17)",
"(cid:55)(cid:75)(cid:76)(cid:86) (cid:82)(cid:83)(cid:72)(cid:85)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:76)(cid:80)(cid:86) (cid:87)(cid:82) (cid:73)(cid:76)(cid:74)(cid:88)(cid:85)(cid:72) (cid:82)(cid:88)(cid:87) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) (cid:80)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:87)(cid:72)(cid:81)(cid:71)(cid:72)(cid:71) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:82)(cid:73) (cid:39)(cid:51)(cid:17) (cid:58)(cid:72) (cid:70)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:72) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:79)(cid:72)(cid:89)(cid:68)(cid:81)(cid:70)(cid:72) (cid:69)(cid:72)(cid:87)(cid:90)(cid:72)(cid:72)(cid:81) (cid:87)(cid:75)(cid:72) (cid:75)(cid:76)(cid:71)(cid:71)(cid:72)(cid:81) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72)(cid:86) h n (cid:85)(cid:72)(cid:83)(cid:16) (cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:39)(cid:51) (cid:68)(cid:81)(cid:71) (cid:76)(cid:87)(cid:86) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) cs i (cid:15) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:76)(cid:81)(cid:74) (cid:76)(cid:81) rs n,i (cid:17) (cid:36)(cid:73)(cid:87)(cid:72)(cid:85) (cid:83)(cid:68)(cid:86)(cid:86)(cid:16) (cid:76)(cid:81)(cid:74) (cid:76)(cid:87) (cid:87)(cid:75)(cid:85)(cid:82)(cid:88)(cid:74)(cid:75) (cid:86)(cid:82)(cid:73)(cid:87)(cid:80)(cid:68)(cid:91)(cid:15) (cid:90)(cid:72) (cid:82)(cid:69)(cid:87)(cid:68)(cid:76)(cid:81) (cid:68) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:71)(cid:76)(cid:86)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) as n,i (cid:68)(cid:86)(cid:29)",
"(cid:58)(cid:72) (cid:87)(cid:75)(cid:72)(cid:81) (cid:70)(cid:82)(cid:81)(cid:70)(cid:79)(cid:88)(cid:71)(cid:72) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:69)(cid:68)(cid:70)(cid:78)(cid:74)(cid:85)(cid:82)(cid:88)(cid:81)(cid:71) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) s n (cid:82)(cid:73) (cid:39)(cid:51) (cid:68)(cid:86) (cid:68) (cid:90)(cid:72)(cid:76)(cid:74)(cid:75)(cid:87)(cid:72)(cid:71) (cid:86)(cid:88)(cid:80) (cid:82)(cid:73) (cid:70)(cid:82)(cid:81)(cid:16) (cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86)(cid:29)",
"s n = m (cid:3) i =1 as n,i cs i",
"(cid:55)(cid:75)(cid:76)(cid:86) (cid:82)(cid:83)(cid:72)(cid:85)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:88)(cid:83)(cid:71)(cid:68)(cid:87)(cid:72)(cid:86) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:39)(cid:51) (cid:69)(cid:92) (cid:70)(cid:82)(cid:80)(cid:69)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:82)(cid:85)(cid:76)(cid:74)(cid:76)(cid:81)(cid:68)(cid:79) (cid:39)(cid:51) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) h n (cid:90)(cid:76)(cid:87)(cid:75) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:69)(cid:68)(cid:70)(cid:78)(cid:74)(cid:85)(cid:82)(cid:88)(cid:81)(cid:71) (cid:78)(cid:81)(cid:82)(cid:90)(cid:79)(cid:72)(cid:71)(cid:74)(cid:72) s n (cid:87)(cid:75)(cid:85)(cid:82)(cid:88)(cid:74)(cid:75) (cid:68) (cid:79)(cid:76)(cid:81)(cid:72)(cid:68)(cid:85) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:86)(cid:29)",
"(cid:55)(cid:75)(cid:72) (cid:88)(cid:83)(cid:71)(cid:68)(cid:87)(cid:72)(cid:71) (cid:39)(cid:51) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72) hs n (cid:90)(cid:76)(cid:79)(cid:79) (cid:69)(cid:72) (cid:88)(cid:86)(cid:72)(cid:71) (cid:76)(cid:81) (cid:86)(cid:88)(cid:69)(cid:86)(cid:72)(cid:16) (cid:84)(cid:88)(cid:72)(cid:81)(cid:87) (cid:90)(cid:82)(cid:85)(cid:71) (cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:76)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72)(cid:17)",
"(cid:41)(cid:76)(cid:74)(cid:88)(cid:85)(cid:72) (cid:22)(cid:29) (cid:38)(cid:82)(cid:88)(cid:81)(cid:87)(cid:86) (cid:82)(cid:73) (cid:72)(cid:68)(cid:70)(cid:75) (cid:87)(cid:92)(cid:83)(cid:72) (cid:82)(cid:73) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:76)(cid:81) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:71)(cid:68)(cid:87)(cid:68) (cid:86)(cid:72)(cid:87)(cid:86)(cid:17) (cid:51)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:75)(cid:68)(cid:89)(cid:72) (cid:68) (cid:80)(cid:82)(cid:85)(cid:72) (cid:69)(cid:68)(cid:79)(cid:68)(cid:81)(cid:70)(cid:72)(cid:71) (cid:71)(cid:76)(cid:86)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:54)(cid:48)(cid:54) (cid:68)(cid:81)(cid:71) (cid:87)(cid:72)(cid:79)(cid:72)(cid:83)(cid:75)(cid:82)(cid:81)(cid:72) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:11)(cid:55)(cid:38)(cid:12)(cid:17) (cid:41)(cid:82)(cid:85) (cid:37)(cid:68)(cid:76)(cid:71)(cid:88)(cid:61)(cid:75)(cid:76)(cid:71)(cid:68)(cid:82)(cid:15) (cid:82)(cid:81)(cid:79)(cid:92) (cid:70)(cid:82)(cid:81)(cid:70)(cid:85)(cid:72)(cid:87)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:68)(cid:81)(cid:81)(cid:82)(cid:87)(cid:68)(cid:87)(cid:72)(cid:71) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72) (cid:71)(cid:76)(cid:86)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72)(cid:86)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:76)(cid:86) (cid:72)(cid:91)(cid:87)(cid:85)(cid:72)(cid:80)(cid:72)(cid:79)(cid:92) (cid:88)(cid:81)(cid:72)(cid:89)(cid:72)(cid:81)(cid:17)",
"(cid:55)(cid:75)(cid:76)(cid:86) (cid:82)(cid:83)(cid:72)(cid:85)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:76)(cid:80)(cid:86) (cid:87)(cid:82) (cid:70)(cid:68)(cid:83)(cid:87)(cid:88)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72) (cid:87)(cid:75)(cid:68)(cid:87) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:86) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:87)(cid:72)(cid:81)(cid:71)(cid:72)(cid:71) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:39)(cid:51)(cid:17) (cid:58)(cid:72) (cid:70)(cid:68)(cid:86)(cid:87) (cid:87)(cid:75)(cid:72) (cid:88)(cid:83)(cid:71)(cid:68)(cid:87)(cid:72)(cid:71) (cid:39)(cid:51) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72) hs n (cid:82)(cid:81)(cid:87)(cid:82) (cid:87)(cid:75)(cid:72) (cid:86)(cid:83)(cid:68)(cid:70)(cid:72) (cid:82)(cid:73) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86) cw i,j (cid:69)(cid:92) (cid:80)(cid:72)(cid:68)(cid:86)(cid:88)(cid:85)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:86)(cid:76)(cid:80)(cid:76)(cid:79)(cid:68)(cid:85)(cid:16) (cid:76)(cid:87)(cid:92) rw n,i,j (cid:69)(cid:72)(cid:87)(cid:90)(cid:72)(cid:72)(cid:81) (cid:87)(cid:75)(cid:72)(cid:80)(cid:17) (cid:55)(cid:75)(cid:72) (cid:70)(cid:68)(cid:86)(cid:87)(cid:76)(cid:81)(cid:74) (cid:82)(cid:83)(cid:72)(cid:85)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:79)(cid:86)(cid:82) (cid:86)(cid:72)(cid:85)(cid:89)(cid:72)(cid:86) (cid:68)(cid:86) (cid:68) (cid:85)(cid:72)(cid:74)(cid:88)(cid:79)(cid:68)(cid:85)(cid:76)(cid:93)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:80)(cid:72)(cid:68)(cid:86)(cid:88)(cid:85)(cid:72) (cid:87)(cid:82) (cid:85)(cid:72)(cid:86)(cid:87)(cid:85)(cid:76)(cid:70)(cid:87) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:86)(cid:83)(cid:68)(cid:70)(cid:72)(cid:17) (cid:55)(cid:75)(cid:72) (cid:86)(cid:76)(cid:80)(cid:76)(cid:79)(cid:68)(cid:85)(cid:76)(cid:87)(cid:76)(cid:72)(cid:86) (cid:68)(cid:85)(cid:72) (cid:73)(cid:72)(cid:71) (cid:76)(cid:81)(cid:87)(cid:82) (cid:87)(cid:75)(cid:72) (cid:86)(cid:82)(cid:73)(cid:87)(cid:80)(cid:68)(cid:91) (cid:73)(cid:88)(cid:81)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:82) (cid:74)(cid:76)(cid:89)(cid:72) (cid:68) (cid:90)(cid:82)(cid:85)(cid:71)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:71)(cid:76)(cid:86)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) aw n,i,j (cid:73)(cid:82)(cid:85) (cid:72)(cid:68)(cid:70)(cid:75) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:86)(cid:29)",
"(cid:55)(cid:75)(cid:72) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:71)(cid:76)(cid:86)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) aw n,i,j (cid:68)(cid:79)(cid:86)(cid:82) (cid:83)(cid:85)(cid:82)(cid:89)(cid:76)(cid:71)(cid:72)(cid:86) (cid:68)(cid:81) (cid:76)(cid:81)(cid:87)(cid:72)(cid:85)(cid:83)(cid:85)(cid:72)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:39)(cid:51) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:82)(cid:81) (cid:87)(cid:75)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86) (cid:76)(cid:87) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:71)(cid:86) (cid:87)(cid:82)(cid:17)",
"(cid:41)(cid:76)(cid:81)(cid:68)(cid:79)(cid:79)(cid:92)(cid:15) (cid:90)(cid:72) (cid:71)(cid:72)(cid:85)(cid:76)(cid:89)(cid:72) (cid:87)(cid:75)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81) w n (cid:73)(cid:85)(cid:82)(cid:80) (cid:90)(cid:82)(cid:85)(cid:71) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72)(cid:86) cw i,j (cid:17) (cid:58)(cid:82)(cid:85)(cid:71)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:16) (cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) aw n,i,j (cid:76)(cid:86) (cid:88)(cid:86)(cid:72)(cid:71) (cid:87)(cid:82) (cid:70)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:72) (cid:68) (cid:90)(cid:72)(cid:76)(cid:74)(cid:75)(cid:87)(cid:72)(cid:71) (cid:86)(cid:88)(cid:80) (cid:82)(cid:73) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:88)(cid:68)(cid:79) (cid:90)(cid:82)(cid:85)(cid:71) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72)(cid:86) cw i,j (cid:82)(cid:73) (cid:72)(cid:68)(cid:70)(cid:75) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:15) (cid:92)(cid:76)(cid:72)(cid:79)(cid:71)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) tw n,i (cid:17) (cid:55)(cid:75)(cid:72)(cid:81)(cid:15) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:16)(cid:79)(cid:72)(cid:89)(cid:72)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) as i (cid:73)(cid:88)(cid:85)(cid:87)(cid:75)(cid:72)(cid:85) (cid:73)(cid:76)(cid:79)(cid:87)(cid:72)(cid:85)(cid:86) (cid:82)(cid:88)(cid:87) (cid:76)(cid:85)(cid:16) (cid:85)(cid:72)(cid:79)(cid:72)(cid:89)(cid:68)(cid:81)(cid:87) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87)(cid:86)(cid:15) (cid:92)(cid:76)(cid:72)(cid:79)(cid:71)(cid:76)(cid:81)(cid:74) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:81)(cid:68)(cid:79) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:85)(cid:72)(cid:83)(cid:16) (cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) w n (cid:68)(cid:86)(cid:29)",
"tw n,i = k (cid:3) j =1 aw n,i,j cw i,j (cid:11)(cid:26)(cid:12) w n = m (cid:3) i =1 as n,i tw n,i (cid:11)(cid:27)(cid:12)",
"(cid:55)(cid:75)(cid:72) (cid:82)(cid:88)(cid:87)(cid:83)(cid:88)(cid:87) (cid:79)(cid:68)(cid:92)(cid:72)(cid:85) (cid:83)(cid:85)(cid:72)(cid:71)(cid:76)(cid:70)(cid:87)(cid:86) (cid:87)(cid:75)(cid:72) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87) (cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:82)(cid:81) (cid:39)(cid:51) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72) h n (cid:68)(cid:81)(cid:71) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) w n (cid:17) (cid:58)(cid:72) (cid:73)(cid:72)(cid:72)(cid:71) (cid:87)(cid:75)(cid:72) (cid:70)(cid:82)(cid:81)(cid:70)(cid:68)(cid:87)(cid:72)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72)(cid:86)(cid:72) (cid:87)(cid:90)(cid:82) (cid:83)(cid:68)(cid:85)(cid:87)(cid:86) (cid:76)(cid:81)(cid:87)(cid:82) (cid:68) (cid:21)(cid:16) (cid:79)(cid:68)(cid:92)(cid:72)(cid:85) (cid:73)(cid:88)(cid:79)(cid:79)(cid:92) (cid:70)(cid:82)(cid:81)(cid:81)(cid:72)(cid:70)(cid:87)(cid:72)(cid:71) (cid:86)(cid:82)(cid:73)(cid:87)(cid:80)(cid:68)(cid:91) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86)(cid:76)(cid:73)(cid:76)(cid:72)(cid:85) (cid:87)(cid:82) (cid:74)(cid:76)(cid:89)(cid:72) (cid:68) (cid:70)(cid:68)(cid:87)(cid:72)(cid:74)(cid:82)(cid:85)(cid:76)(cid:70)(cid:68)(cid:79) (cid:83)(cid:85)(cid:82)(cid:69)(cid:68)(cid:69)(cid:76)(cid:79)(cid:76)(cid:87)(cid:92) (cid:71)(cid:76)(cid:86)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:89)(cid:72)(cid:85) (cid:20)(cid:26) (cid:70)(cid:68)(cid:81)(cid:71)(cid:76)(cid:16) (cid:71)(cid:68)(cid:87)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:86)(cid:29)",
"(cid:58)(cid:72) (cid:87)(cid:85)(cid:68)(cid:76)(cid:81) (cid:82)(cid:88)(cid:85) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:69)(cid:92) (cid:80)(cid:76)(cid:81)(cid:76)(cid:80)(cid:76)(cid:93)(cid:76)(cid:81)(cid:74) (cid:70)(cid:85)(cid:82)(cid:86)(cid:86)(cid:16) (cid:72)(cid:81)(cid:87)(cid:85)(cid:82)(cid:83)(cid:92) (cid:69)(cid:72)(cid:87)(cid:90)(cid:72)(cid:72)(cid:81) (cid:87)(cid:75)(cid:72) (cid:83)(cid:85)(cid:72)(cid:71)(cid:76)(cid:70)(cid:87)(cid:72)(cid:71) (cid:79)(cid:68)(cid:69)(cid:72)(cid:79) (cid:71)(cid:76)(cid:86)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72) (cid:68)(cid:81)(cid:81)(cid:82)(cid:87)(cid:68)(cid:87)(cid:72)(cid:71) (cid:79)(cid:68)(cid:69)(cid:72)(cid:79)(cid:86) (cid:73)(cid:82)(cid:85) (cid:68)(cid:79)(cid:79) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72) (cid:87)(cid:85)(cid:68)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:82)(cid:69)(cid:77)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72) (cid:76)(cid:86) (cid:71)(cid:72)(cid:73)(cid:76)(cid:81)(cid:72)(cid:71) (cid:68)(cid:86)(cid:29)",
"(cid:90)(cid:75)(cid:72)(cid:85)(cid:72) N (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:86) (cid:68)(cid:79)(cid:79) (cid:87)(cid:85)(cid:68)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:76)(cid:81)(cid:86)(cid:87)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86)(cid:15) s (cid:85)(cid:72)(cid:83)(cid:16) (cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:86) (cid:87)(cid:75)(cid:72) (cid:81)(cid:88)(cid:80)(cid:69)(cid:72)(cid:85) (cid:82)(cid:73) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86) (cid:76)(cid:81) (cid:72)(cid:68)(cid:70)(cid:75) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:30) ( y n | x n , c ) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:86) (cid:87)(cid:75)(cid:72) (cid:68)(cid:81)(cid:81)(cid:82)(cid:87)(cid:68)(cid:87)(cid:72)(cid:71) (cid:79)(cid:68)(cid:69)(cid:72)(cid:79) (cid:82)(cid:73) x n (cid:17)",
"(cid:58)(cid:72) (cid:83)(cid:72)(cid:85)(cid:73)(cid:82)(cid:85)(cid:80) (cid:72)(cid:91)(cid:83)(cid:72)(cid:85)(cid:76)(cid:80)(cid:72)(cid:81)(cid:87)(cid:86) (cid:82)(cid:81) (cid:87)(cid:75)(cid:85)(cid:72)(cid:72) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:16) (cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:71)(cid:68)(cid:87)(cid:68)(cid:86)(cid:72)(cid:87)(cid:86)(cid:17)",
"(cid:135) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:87)(cid:72)(cid:91)(cid:87) (cid:80)(cid:72)(cid:86)(cid:86)(cid:68)(cid:74)(cid:72) (cid:11)(cid:54)(cid:48)(cid:54)(cid:12) (cid:71)(cid:68)(cid:87)(cid:68)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:70)(cid:82)(cid:81)(cid:16) (cid:86)(cid:76)(cid:86)(cid:87)(cid:86) (cid:82)(cid:73) (cid:25)(cid:27)(cid:23) (cid:54)(cid:48)(cid:54)(cid:18)(cid:38)(cid:75)(cid:68)(cid:87) (cid:73)(cid:76)(cid:79)(cid:72)(cid:86)(cid:17) (cid:58)(cid:72) (cid:88)(cid:86)(cid:72) (cid:87)(cid:75)(cid:72) (cid:86)(cid:68)(cid:80)(cid:72) (cid:71)(cid:68)(cid:87)(cid:68)(cid:86)(cid:72)(cid:87) (cid:86)(cid:83)(cid:79)(cid:76)(cid:87) (cid:68)(cid:86) (cid:11)(cid:60)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:12)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:85)(cid:72)(cid:16) (cid:86)(cid:72)(cid:85)(cid:89)(cid:72)(cid:86) 16 .",
"7 (cid:8) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:87)(cid:85)(cid:68)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:86)(cid:72)(cid:87) (cid:68)(cid:86) (cid:68) (cid:75)(cid:72)(cid:79)(cid:71)(cid:16)(cid:82)(cid:88)(cid:87) (cid:71)(cid:72)(cid:89)(cid:72)(cid:79)(cid:82)(cid:83)(cid:80)(cid:72)(cid:81)(cid:87) (cid:86)(cid:72)(cid:87) (cid:87)(cid:82) (cid:87)(cid:88)(cid:81)(cid:72) (cid:87)(cid:75)(cid:72) (cid:75)(cid:92)(cid:83)(cid:72)(cid:85)(cid:16)(cid:83)(cid:68)(cid:85)(cid:68)(cid:80)(cid:72)(cid:87)(cid:72)(cid:85)(cid:86) (cid:68)(cid:81)(cid:71) (cid:72)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:72) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:82)(cid:81) (cid:68) (cid:86)(cid:72)(cid:83)(cid:68)(cid:85)(cid:68)(cid:87)(cid:72) (cid:87)(cid:72)(cid:86)(cid:87) (cid:86)(cid:72)(cid:87)(cid:17) 897 (cid:135) (cid:50)(cid:81)(cid:87)(cid:82)(cid:49)(cid:82)(cid:87)(cid:72)(cid:86) (cid:53)(cid:72)(cid:79)(cid:72)(cid:68)(cid:86)(cid:72) (cid:24)(cid:17)(cid:19)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:76)(cid:86) (cid:88)(cid:86)(cid:72)(cid:71) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:38)(cid:82)(cid:49)(cid:47)(cid:47) (cid:21)(cid:19)(cid:20)(cid:21) (cid:54)(cid:75)(cid:68)(cid:85)(cid:72)(cid:71) (cid:55)(cid:68)(cid:86)(cid:78)(cid:17) (cid:58)(cid:72) (cid:88)(cid:86)(cid:72) (cid:68) (cid:83)(cid:82)(cid:85)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:50)(cid:81)(cid:87)(cid:82)(cid:49)(cid:82)(cid:87)(cid:72)(cid:86) (cid:24)(cid:17)(cid:19) (cid:87)(cid:75)(cid:68)(cid:87) (cid:70)(cid:82)(cid:81)(cid:86)(cid:76)(cid:86)(cid:87)(cid:86) (cid:82)(cid:73) (cid:87)(cid:85)(cid:68)(cid:81)(cid:16) (cid:86)(cid:70)(cid:85)(cid:76)(cid:83)(cid:87)(cid:86) (cid:82)(cid:73) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:87)(cid:72)(cid:79)(cid:72)(cid:83)(cid:75)(cid:82)(cid:81)(cid:72) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:86)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:11)(cid:55)(cid:38)(cid:12) (cid:86)(cid:83)(cid:72)(cid:72)(cid:70)(cid:75)(cid:17) (cid:55)(cid:75)(cid:72) (cid:28)(cid:15)(cid:24)(cid:19)(cid:26)(cid:16)(cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:86)(cid:88)(cid:69)(cid:86)(cid:72)(cid:87) (cid:82)(cid:73) (cid:50)(cid:81)(cid:87)(cid:82)(cid:49)(cid:82)(cid:87)(cid:72)(cid:86) (cid:82)(cid:81)(cid:79)(cid:92) (cid:75)(cid:68)(cid:86) (cid:70)(cid:82)(cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:68)(cid:81)(cid:81)(cid:82)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:73)(cid:82)(cid:85) (cid:68)(cid:81)(cid:68)(cid:83)(cid:75)(cid:82)(cid:85)(cid:76)(cid:70) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86)(cid:17) (cid:44)(cid:81) (cid:82)(cid:85)(cid:71)(cid:72)(cid:85) (cid:87)(cid:82) (cid:73)(cid:82)(cid:85)(cid:16) (cid:80)(cid:88)(cid:79)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:72) (cid:87)(cid:68)(cid:86)(cid:78) (cid:68)(cid:86) (cid:68)(cid:81) (cid:72)(cid:81)(cid:71)(cid:16)(cid:87)(cid:82)(cid:16)(cid:72)(cid:81)(cid:71) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72) (cid:79)(cid:68)(cid:69)(cid:72)(cid:79)(cid:76)(cid:81)(cid:74) (cid:87)(cid:68)(cid:86)(cid:78)(cid:15) (cid:90)(cid:72) (cid:68)(cid:81)(cid:81)(cid:82)(cid:87)(cid:68)(cid:87)(cid:72) (cid:68)(cid:79)(cid:79) (cid:87)(cid:75)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) 
(cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:73)(cid:82)(cid:79)(cid:79)(cid:82)(cid:90)(cid:76)(cid:81)(cid:74) (cid:68)(cid:81)(cid:81)(cid:82)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:74)(cid:88)(cid:76)(cid:71)(cid:72)(cid:79)(cid:76)(cid:81)(cid:72)(cid:86) (cid:71)(cid:72)(cid:16) (cid:86)(cid:70)(cid:85)(cid:76)(cid:69)(cid:72)(cid:71) (cid:76)(cid:81) (cid:11)(cid:60)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:12)(cid:17) (cid:135) (cid:37)(cid:68)(cid:76)(cid:71)(cid:88)(cid:61)(cid:75)(cid:76)(cid:71)(cid:68)(cid:82)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:76)(cid:86) (cid:68) (cid:84)(cid:88)(cid:72)(cid:86)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:86)(cid:90)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:71)(cid:76)(cid:68)(cid:79)(cid:82)(cid:74)(cid:88)(cid:72) (cid:71)(cid:68)(cid:87)(cid:68)(cid:86)(cid:72)(cid:87) (cid:88)(cid:86)(cid:72)(cid:71) (cid:76)(cid:81) (cid:11)(cid:61)(cid:75)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:25)(cid:12)(cid:17) (cid:44)(cid:87) (cid:70)(cid:82)(cid:81)(cid:87)(cid:68)(cid:76)(cid:81)(cid:86) (cid:20)(cid:20)(cid:15)(cid:20)(cid:25)(cid:19) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86) (cid:87)(cid:75)(cid:68)(cid:87) (cid:68)(cid:85)(cid:72) (cid:68)(cid:81)(cid:81)(cid:82)(cid:16) (cid:87)(cid:68)(cid:87)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75) (cid:20)(cid:19) (cid:87)(cid:92)(cid:83)(cid:72)(cid:86) (cid:82)(cid:73) (cid:70)(cid:82)(cid:81)(cid:70)(cid:85)(cid:72)(cid:87)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:16) (cid:81)(cid:82)(cid:88)(cid:81)(cid:86)(cid:17) (cid:41)(cid:76)(cid:74)(cid:88)(cid:85)(cid:72) (cid:22) (cid:86)(cid:75)(cid:82)(cid:90)(cid:86) (cid:87)(cid:75)(cid:72) (cid:86)(cid:87)(cid:68)(cid:87)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70)(cid:86) (cid:82)(cid:73) (cid:72)(cid:68)(cid:70)(cid:75) (cid:87)(cid:92)(cid:83)(cid:72) (cid:82)(cid:73) (cid:83)(cid:85)(cid:82)(cid:16) (cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:87)(cid:75)(cid:85)(cid:72)(cid:72) (cid:71)(cid:68)(cid:87)(cid:68) (cid:86)(cid:72)(cid:87)(cid:86)(cid:17) (cid:36)(cid:70)(cid:70)(cid:82)(cid:85)(cid:71)(cid:76)(cid:81)(cid:74) (cid:87)(cid:82) (cid:11)(cid:60)(cid:68)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:12)(cid:15) (cid:24) (cid:82)(cid:88)(cid:87) (cid:82)(cid:73) (cid:20)(cid:24) (cid:87)(cid:92)(cid:83)(cid:72)(cid:86) (cid:82)(cid:73) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:54)(cid:48)(cid:54) (cid:71)(cid:68)(cid:87)(cid:68) (cid:86)(cid:72)(cid:87) (cid:68)(cid:85)(cid:72) (cid:68)(cid:69)(cid:86)(cid:87)(cid:85)(cid:68)(cid:70)(cid:87) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:87)(cid:75)(cid:68)(cid:87) (cid:71)(cid:82) (cid:81)(cid:82)(cid:87) (cid:70)(cid:82)(cid:85)(cid:85)(cid:72)(cid:86)(cid:83)(cid:82)(cid:81)(cid:71) (cid:87)(cid:82) (cid:68)(cid:81)(cid:92) (cid:68)(cid:70)(cid:87)(cid:88)(cid:68)(cid:79) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72)(cid:17) (cid:55)(cid:75)(cid:72)(cid:86)(cid:72) 
(cid:70)(cid:68)(cid:81) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85) (cid:87)(cid:82) (cid:68)(cid:81) (cid:72)(cid:89)(cid:72)(cid:81)(cid:87) (cid:11)(cid:40)(cid:89)(cid:72)(cid:81)(cid:87)(cid:12)(cid:15) (cid:87)(cid:75)(cid:72) (cid:83)(cid:85)(cid:72)(cid:89)(cid:76)(cid:82)(cid:88)(cid:86) (cid:88)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72) (cid:11)(cid:51)(cid:85)(cid:72)(cid:89)(cid:76)(cid:82)(cid:88)(cid:86) (cid:56)(cid:87)(cid:87)(cid:72)(cid:85)(cid:68)(cid:81)(cid:70)(cid:72)(cid:12)(cid:15) (cid:82)(cid:85) (cid:68)(cid:81) (cid:74)(cid:72)(cid:81)(cid:72)(cid:85)(cid:76)(cid:70) (cid:82)(cid:85) (cid:88)(cid:81)(cid:86)(cid:83)(cid:72)(cid:70)(cid:76)(cid:73)(cid:76)(cid:70) (cid:72)(cid:81)(cid:87)(cid:76)(cid:87)(cid:92) (cid:11)(cid:42)(cid:72)(cid:81)(cid:72)(cid:85)(cid:76)(cid:70)(cid:12)(cid:17) (cid:55)(cid:75)(cid:72) (cid:82)(cid:87)(cid:75)(cid:72)(cid:85) (cid:87)(cid:90)(cid:82) (cid:68)(cid:69)(cid:86)(cid:87)(cid:85)(cid:68)(cid:70)(cid:87) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:88)(cid:86)(cid:72)(cid:71) (cid:87)(cid:82) (cid:76)(cid:81)(cid:71)(cid:76)(cid:70)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:72) (cid:86)(cid:88)(cid:69)(cid:77)(cid:72)(cid:70)(cid:87) (cid:82)(cid:73) (cid:68)(cid:81) (cid:72)(cid:91)(cid:16) (cid:76)(cid:86)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:68)(cid:79) (cid:70)(cid:82)(cid:81)(cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:11)(cid:40)(cid:91)(cid:76)(cid:86)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:68)(cid:79)(cid:12) (cid:82)(cid:85) (cid:68) (cid:83)(cid:79)(cid:72)(cid:82)(cid:81)(cid:68)(cid:86)(cid:87)(cid:76)(cid:70) (cid:86)(cid:88)(cid:69)(cid:77)(cid:72)(cid:70)(cid:87) (cid:11)(cid:51)(cid:79)(cid:72)(cid:82)(cid:81)(cid:68)(cid:86)(cid:87)(cid:76)(cid:70)(cid:12)(cid:17) (cid:55)(cid:75)(cid:72) (cid:82)(cid:87)(cid:75)(cid:72)(cid:85) (cid:87)(cid:72)(cid:81) (cid:87)(cid:92)(cid:83)(cid:72)(cid:86) (cid:82)(cid:73) (cid:38)(cid:75)(cid:76)(cid:16) (cid:81)(cid:72)(cid:86)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86)(cid:76)(cid:73)(cid:76)(cid:72)(cid:71) (cid:68)(cid:86) (cid:70)(cid:82)(cid:81)(cid:70)(cid:85)(cid:72)(cid:87)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72) (cid:86)(cid:68)(cid:80)(cid:72) (cid:20)(cid:24) (cid:87)(cid:92)(cid:83)(cid:72)(cid:86) (cid:82)(cid:73) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:88)(cid:86)(cid:72)(cid:71) (cid:87)(cid:82) (cid:68)(cid:81)(cid:81)(cid:82)(cid:16) (cid:87)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:72) (cid:55)(cid:38) (cid:71)(cid:68)(cid:87)(cid:68)(cid:17) (cid:58)(cid:72) (cid:70)(cid:68)(cid:81) (cid:86)(cid:72)(cid:72) (cid:87)(cid:75)(cid:68)(cid:87) (cid:87)(cid:75)(cid:72) (cid:37)(cid:68)(cid:76)(cid:39)(cid:88)(cid:61)(cid:75)(cid:76)(cid:71)(cid:68)(cid:82) (cid:71)(cid:68)(cid:87)(cid:68) (cid:86)(cid:72)(cid:87) (cid:82)(cid:81)(cid:79)(cid:92) (cid:75)(cid:68)(cid:86) (cid:70)(cid:82)(cid:81)(cid:70)(cid:85)(cid:72)(cid:87)(cid:72) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86)(cid:17) (cid:23)(cid:17)(cid:21) (cid:40)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:80)(cid:72)(cid:87)(cid:85)(cid:76)(cid:70)(cid:86) (cid:58)(cid:72) (cid:88)(cid:86)(cid:72) 
We use the same evaluation metrics as Yang et al. (2015): precision (P), recall (R) and F-score (F).
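Since the task is cast as sequence labeling over pronoun slots, these metrics reduce to simple counts. A minimal sketch, assuming parallel per-token gold and predicted tag sequences with a reserved tag for positions that carry no dropped pronoun (the tag names are placeholders, not from the paper):

```python
def prf(gold, pred, none_tag="NONE"):
    """Precision, recall, and F-score for dropped pronoun recovery."""
    recovered = sum(p != none_tag for p in pred)          # DPs the system recovered
    annotated = sum(g != none_tag for g in gold)          # DPs in the annotation
    correct = sum(g == p != none_tag for g, p in zip(gold, pred))
    precision = correct / recovered if recovered else 0.0
    recall = correct / annotated if annotated else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```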
4.3 Training details

Our model is implemented with TensorFlow. The vocabulary is generated from the training set, which contains 17,199 word types. Out-of-vocabulary (OOV) words are represented as UNK. The BiGRU encoder uses a hidden layer of 150 units. To train the model, we use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.0003. We train the model for 8 epochs and select the model with the highest F-score on the development set for testing. The dropout rate is set at 0.2 on the fully connected layers. We use uniform initialization for the weight matrices and zero initialization for the biases.
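These hyper-parameters map directly onto standard TensorFlow building blocks. A minimal sketch of the configuration using the present-day tf.keras API (the wiring is assumed, not taken from the original implementation):

```python
import tensorflow as tf

# BiGRU encoder with a 150-unit hidden layer
encoder = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(150, return_sequences=True))

# Fully connected layers: dropout 0.2, uniform weight init, zero bias init
dropout = tf.keras.layers.Dropout(0.2)
classifier = tf.keras.layers.Dense(
    17,                                   # 17 candidate pronoun labels
    activation="softmax",
    kernel_initializer="random_uniform",
    bias_initializer="zeros")

# Adam optimizer with learning rate 0.0003; train for 8 epochs and keep
# the checkpoint with the best development-set F-score
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-4)
EPOCHS = 8
```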
4.4 Baseline Methods and Model Variations

We compare our proposed NDPR framework with three baselines and implement two parallel experiments:

• MEPR: This model is proposed by Yang et al. (2015), which uses a maximum entropy (ME) classifier with hand-crafted features to recover dropped pronouns.

• NRM: This model is proposed by Zhang et al. (2016), which uses two independent multi-layer perceptrons to locate the position of dropped pronouns and to determine the type of dropped pronouns.

• BiGRU: This model encodes each word in the target sentence with a bidirectional GRU. The output representation of the BiGRU encoder is used to predict the type of the dropped pronoun. This method can be seen as a degenerate variant of our NDPR model without the attention mechanism.

• NDPR: This model uses the BiGRU encoder with both sentence-level and word-level attention. The word embeddings are initialized with pre-trained 300-D Zhihu QA vectors (Li et al., 2018) and fine-tuned when training the DP recovery model.
• NDPR-rand: Same as NDPR except that the word embeddings are randomly initialized.

• NDPR-PC-BiGRU: Same as NDPR but the encoder of utterance X is replaced with PC-BiGRU.

We also perform two ablation experiments to explore the effectiveness of sentence-level attention and word-level attention:

• NDPR-S: Same as NDPR but the referent modeling layer only utilizes sentence-level attention. The output layer makes its prediction with only the DP state h_n and sentence-level information s_n.

• NDPR-W: Same as NDPR but the referent modeling layer only utilizes word-level attention. The output layer makes its prediction with only the DP state h_n and word-level information w_n.
5 Results and Discussion

5.1 Main Results

Table 1 shows the experimental results of the three baseline systems and variants of our proposed NDPR model on Chinese SMS, the TC section of OntoNotes, and BaiduZhidao.
Model                    | Chinese SMS        | TC of OntoNotes    | BaiduZhidao
                         | P(%)  R(%)  F      | P(%)  R(%)  F      | P(%)  R(%)  F
-------------------------+--------------------+--------------------+-------------------
MEPR (Yang et al., 2015) | 37.27 45.57 38.76  | -     -     -      | -     -     -
NRM (Zhang et al., 2016) | 37.11 44.07 39.03  | 23.12 26.09 22.80  | 26.87 49.44 34.54
BiGRU                    | 40.18 45.32 42.67  | 25.64 36.82 30.93  | 29.35 42.38 35.83
NDPR-rand                | 46.47 43.23 43.58  | 28.98 41.50 33.38  | 35.44 43.82 37.79
NDPR-PC-BiGRU            | 46.34 46.21 46.27  | 36.69 40.12 38.33  | 38.42 48.01 41.68
NDPR-W                   | 46.78 46.61 45.76  | 38.67 41.56 39.64  | 38.60 50.12 43.36
NDPR-S                   | 46.99 46.32 44.89  | 37.40 40.32 38.81  | 39.32 46.40 41.53
NDPR                     | 49.39 44.89 46.39  | 39.63 43.09 39.77  | 41.04 46.55 42.94

Table 1: Results in terms of precision, recall and F-score on 16 types of pronouns produced by the baseline systems and variants of our proposed NDPR model. For NRM (Zhang et al., 2016), we implement the proposed model as described in the paper.
Tag                   | NDPR-S | NDPR-W | NDPR
----------------------+--------+--------+------
他们 (masculine they)  | 17.05  | 23.28  | 24.44
她 (she)              | 32.35  | 33.72  | 35.14
previous utterance    | 84.90  | 86.08  | 87.55
他 (he)               | 29.05  | 31.20  | 34.92
它 (it)               | 25.00  | 26.67  | 26.95
她们 (feminine they)   | 0      | 0      | 40.00
我 (I)                | 50.66  | 50.90  | 52.98
我们 (we)             | 31.49  | 33.57  | 34.81
你 (singular you)     | 42.88  | 44.15  | 44.31
pleonastic            | 25.89  | 22.29  | 28.46
generic               | 11.61  | 11.08  | 16.83
event                 | 6.15   | 0      | 16.27
existential           | 34.17  | 30.84  | 38.71
你们 (plural you)      | 0      | 0      | 5.41
它们 (inanimate they)  | 16.00  | 19.15  | 13.89

Table 2: F-scores of our proposed model NDPR and its two variants (NDPR-S, NDPR-W) for concrete and abstract pronouns on the Chinese SMS test set.
We can see that our proposed model and its variants outperform the baseline methods on all of these data sets by different margins.
Our best model, NDPR, outperforms MEPR by 7.63% in terms of F-score on the Chinese SMS dataset (46.39 vs. 38.76), and outperforms NRM by 16.97% and 8.40% on the OntoNotes and BaiduZhidao datasets, respectively. Compared with the degenerate variant BiGRU, our NDPR model also performs better on all three datasets, which demonstrates the effectiveness of the referent modeling mechanism composed of sentence-level and word-level attention.
The experimental results also show that all components of our model have made a positive contribution, which is evidenced by the fact that the full NDPR model outperforms the other variants by a small margin. The results also show that pre-trained word embeddings have had a positive impact on the model, as shown by the higher accuracy of the NDPR model over the NDPR-rand model, where the word embeddings are randomly initialized. The PC-BiGRU encoder seems to perform worse than the vanilla BiGRU encoder, as shown by the slightly lower F-score of the NDPR-PC-BiGRU model.
But it alleviates what we call the local pronoun repetition problem, the situation where a DP is recovered redundantly in a sentence before a verb and its adverbial modifier. We attribute this to the fact that PC-BiGRU uses the context state h_{n-1} of the last word before the DP instead of h_n, which makes the model pay more attention to the preceding modifier. For the BaiduZhidao dataset, the NDPR-W model actually performs better than the full NDPR model, as indicated by the higher F-score for NDPR-W in the last column of Table 1.
For the BaiduZhidao dataset, the NDPR-W model actually performs better than the full NDPR model, as indicated by the higher F-score for NDPR-W in the last column of Table 1. We attribute this to the fact that there are only concrete pronouns in this dataset: the combination of “(I)”, “(singular you)”, and “(it)” accounts for 94.47% of the overall dropped pronoun population, and for these pronouns the referent can be easily determined by word-level attention. Moreover, the fewer conversation turns in this dataset mean there are few irrelevant referents that need to be filtered out by sentence-level attention.

5.2 Ablation Study

In this section, we dive a bit deeper and look at the impact of the attention mechanism on concrete and abstract pronouns.
[Figure 4: Visualization of attention in the NDPR model. The red pattern on the left shows the distribution of sentence-level attention and the blue pattern on the right shows the distribution of word-level attention. Darker color indicates higher attention weight.]

Table 2 shows the F-scores of three variants of our proposed model for
each type of pronoun on the entire Chinese SMS test set. The best results among these three variants are in boldface, and the better results between NDPR-S and NDPR-W are underlined. We can see that for all types of concrete pronouns, the NDPR-W model outperforms the NDPR-S model by a significant margin. In general, the referent of a concrete dropped pronoun is usually realized in the form of a phrase that consists of one or more words, and such phrases can be accurately captured by word-level attention. The NDPR model that incorporates both word-level and sentence-level attention further improves upon NDPR-W, with the lone exception of (“they”). We believe the reason is that the sentence-level encoder cannot adequately represent the interaction between the multiple referents of plural pronouns like (“they”) and only adds noise to the representation. However, this observation does not hold for abstract dropped pronouns: NDPR-W performs comparably or slightly worse than NDPR-S on four out of the five types of abstract pronouns.
This is consistent with the fact that the referent of abstract pronouns like “Event” is often an entire sentence.

[Figure 5: Example errors made by NDPR on different conversational datasets. One example is shown for each dataset (Chinese SMS, TC of OntoNotes, BaiduZhidao), with the context, the gold recovery, and the NDPR output.]
The full NDPR model still outperforms both NDPR-W and NDPR-S for all abstract pronouns.

5.3 Attention Visualization

Intuitively, sentence-level attention models the interaction between the dropped pronoun and each context utterance: the utterance containing the referent of the dropped pronoun should receive more attention. Similarly, word-level attention models the interaction between each word in the context and the dropped pronoun: words that
describe the referent of the pronoun should receive more attention if the model works as intended. Figure 4 shows an instance of the attention distributions produced by our model. In this example, the model correctly gives higher attention weights to three utterances that contain words indicating the referent of the pronoun. At the word level, the model also gives higher attention weights to the specific words that indicate the referent of the dropped pronoun: for example, words such as (“she”) and (“daughter”) have received higher weights, as indicated by the darker color. This suggests that in this case the model attends to the right utterances and words, and works as we expected.
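To make the two levels concrete, here is a minimal sketch assuming simple dot-product scoring against a query vector for the DP slot (the actual model may use a different scoring function; all names and dimensions here are illustrative):

    import torch
    import torch.nn.functional as F

    d = 300
    dp_query = torch.randn(d)                   # representation of the dropped-pronoun slot
    utterances = torch.randn(4, d)              # vectors for 4 context utterances
    words = [torch.randn(n, d) for n in (6, 5, 7, 4)]  # word states per utterance

    # Sentence-level attention: which utterance contains the referent?
    sent_attn = F.softmax(utterances @ dp_query, dim=0)          # shape (4,)

    # Word-level attention: which words within each utterance describe the referent?
    word_attn = [F.softmax(w @ dp_query, dim=0) for w in words]  # shapes (6,), (5,), (7,), (4,)

    # One way to combine both levels into a single referent summary vector:
    referent = sum(sent_attn[i] * (word_attn[i] @ words[i]) for i in range(4))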
5.4 Error Analysis

Figure 5 shows some typical mistakes made by our model on each genre. For the Chinese SMS dataset, the distribution of the different types of pronouns is relatively balanced (see Table 3), and our model does a better job on concrete pronouns but stumbles on abstract pronouns like “event” and “generic”, as shown in Table 2, since it is harder to attend to the correct parts of the context in these cases. The TC data of OntoNotes is a transcription of telephone conversations in which there are often repetitions, and our model struggles when people repeat what they said: if the same pronoun is mentioned repeatedly, our model cannot capture this interaction, since each pronoun is recovered independently. This suggests that one future improvement might involve using a sequence-based decoder.
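As an illustration of this direction (a toy sketch under assumed label and dimension sizes, not a claim about the actual architecture), a sequence-based decoder would feed the previous prediction back into the next decoding step instead of classifying each slot independently:

    import torch
    import torch.nn as nn

    d, n_labels = 300, 17          # hidden size and label-set size are assumptions
    slots = torch.randn(3, d)      # representations of 3 DP slots in one dialogue

    # Current design: every slot is classified independently.
    classify = nn.Linear(d, n_labels)
    indep_preds = classify(slots).argmax(dim=-1)

    # Sequence-based alternative: feed the embedding of the previous label back in,
    # so that repeated pronouns can influence later decisions.
    label_emb = nn.Embedding(n_labels, d)
    cell = nn.GRUCell(d, d)
    h = torch.zeros(1, d)
    prev = torch.zeros(1, d)       # no previous label at the first slot
    seq_preds = []
    for slot in slots:
        h = cell(slot.unsqueeze(0) + prev, h)
        label = classify(h).argmax(dim=-1)
        seq_preds.append(int(label))
        prev = label_emb(label)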
In the BaiduZhidao dataset, only concrete pronouns are annotated, as shown in Figure 3. Pronouns like (“it”) and (“they”) account for a large proportion of all dropped pronouns. For these two categories, the performance of our proposed method is hit-and-miss, which can be attributed to the absence of the common-sense knowledge needed for pronoun resolution.

6 Conclusions

We have proposed an end-to-end neural network architecture that attempts to model the interaction between the dropped pronoun and its referent in order to recover dropped pronouns in Chinese
conversational data. Our model is based on sentence-level and word-level attention, and the results show that it consistently outperforms baseline methods when evaluated on three separate datasets. We further investigate the effectiveness of the different components of our model by performing ablation experiments, and demonstrate the interpretability of our model through attention visualization.

Acknowledgments

This work was supported by the National Natural Science Foundation of China
(No.61702047, No.61300080) and the Beijing Natural Science Foundation (No.4174098).
(cid:53)(cid:72)(cid:81)(cid:73)(cid:72)(cid:81) (cid:43)(cid:88)(cid:15) (cid:58)(cid:72)(cid:81)(cid:86)(cid:76) (cid:47)(cid:76)(cid:15) (cid:55)(cid:68)(cid:82) (cid:47)(cid:76)(cid:88)(cid:15) (cid:68)(cid:81)(cid:71) (cid:59)(cid:76)(cid:68)(cid:82)(cid:92)(cid:82)(cid:81)(cid:74) (cid:39)(cid:88)(cid:17) (cid:21)(cid:19)(cid:20)(cid:27)(cid:17) (cid:36)(cid:81)(cid:68)(cid:79)(cid:82)(cid:74)(cid:76)(cid:70)(cid:68)(cid:79) (cid:85)(cid:72)(cid:68)(cid:86)(cid:82)(cid:81)(cid:16) (cid:76)(cid:81)(cid:74) (cid:82)(cid:81) (cid:70)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:80)(cid:82)(cid:85)(cid:83)(cid:75)(cid:82)(cid:79)(cid:82)(cid:74)(cid:76)(cid:70)(cid:68)(cid:79) (cid:68)(cid:81)(cid:71) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70) (cid:85)(cid:72)(cid:16) (cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86)(cid:17) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:72)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:48)(cid:72)(cid:72)(cid:87)(cid:76)(cid:81)(cid:74) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:36)(cid:86)(cid:86)(cid:82)(cid:16) (cid:70)(cid:76)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:38)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:47)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70)(cid:86)(cid:17) (cid:36)(cid:79)(cid:72)(cid:91)(cid:68)(cid:81)(cid:71)(cid:72)(cid:85) (cid:43) (cid:48)(cid:76)(cid:79)(cid:79)(cid:72)(cid:85)(cid:15) (cid:36)(cid:71)(cid:68)(cid:80) (cid:41)(cid:76)(cid:86)(cid:70)(cid:75)(cid:15) (cid:45)(cid:72)(cid:86)(cid:86)(cid:72) (cid:39)(cid:82)(cid:71)(cid:74)(cid:72)(cid:15) (cid:36)(cid:80)(cid:76)(cid:85)(cid:75)(cid:82)(cid:86)(cid:86)(cid:72)(cid:76)(cid:81) (cid:46)(cid:68)(cid:85)(cid:76)(cid:80)(cid:76)(cid:15) (cid:36)(cid:81)(cid:87)(cid:82)(cid:76)(cid:81)(cid:72) (cid:37)(cid:82)(cid:85)(cid:71)(cid:72)(cid:86)(cid:15) (cid:68)(cid:81)(cid:71) (cid:45)(cid:68)(cid:86)(cid:82)(cid:81) (cid:58)(cid:72)(cid:86)(cid:87)(cid:82)(cid:81)(cid:17) (cid:21)(cid:19)(cid:20)(cid:25)(cid:17) (cid:46)(cid:72)(cid:92)(cid:16)(cid:89)(cid:68)(cid:79)(cid:88)(cid:72) (cid:80)(cid:72)(cid:80)(cid:82)(cid:85)(cid:92) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:86) (cid:73)(cid:82)(cid:85) (cid:71)(cid:76)(cid:85)(cid:72)(cid:70)(cid:87)(cid:79)(cid:92) (cid:85)(cid:72)(cid:68)(cid:71)(cid:76)(cid:81)(cid:74) (cid:71)(cid:82)(cid:70)(cid:88)(cid:80)(cid:72)(cid:81)(cid:87)(cid:86)(cid:17) (cid:38)(cid:82)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:81) (cid:40)(cid:80)(cid:16) (cid:83)(cid:76)(cid:85)(cid:76)(cid:70)(cid:68)(cid:79) (cid:48)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71)(cid:86) (cid:76)(cid:81) (cid:49)(cid:68)(cid:87)(cid:88)(cid:85)(cid:68)(cid:79) (cid:47)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86)(cid:16) (cid:76)(cid:81)(cid:74)(cid:17) (cid:15) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:20)(cid:23)(cid:19)(cid:19)(cid:177)(cid:20)(cid:23)(cid:19)(cid:28)(cid:17) (cid:57)(cid:76)(cid:81)(cid:70)(cid:72)(cid:81)(cid:87) (cid:49)(cid:74)(cid:17) (cid:21)(cid:19)(cid:19)(cid:26)(cid:17) (cid:54)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86) (cid:76)(cid:81)(cid:71)(cid:88)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) 
(cid:70)(cid:82)(cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:44)(cid:81) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:72)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:48)(cid:72)(cid:72)(cid:87)(cid:76)(cid:81)(cid:74) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:36)(cid:86)(cid:86)(cid:82)(cid:70)(cid:76)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:38)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:47)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70)(cid:86)(cid:15) (cid:45)(cid:88)(cid:81)(cid:72) (cid:21)(cid:22)(cid:16)(cid:22)(cid:19)(cid:15) (cid:21)(cid:19)(cid:19)(cid:26)(cid:15) (cid:51)(cid:85)(cid:68)(cid:74)(cid:88)(cid:72)(cid:15) (cid:38)(cid:93)(cid:72)(cid:70)(cid:75) (cid:53)(cid:72)(cid:83)(cid:88)(cid:69)(cid:79)(cid:76)(cid:70) (cid:15) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:24)(cid:22)(cid:25)(cid:177)(cid:24)(cid:23)(cid:22)(cid:17) (cid:54)(cid:68)(cid:76)(cid:81)(cid:69)(cid:68)(cid:92)(cid:68)(cid:85) (cid:54)(cid:88)(cid:78)(cid:75)(cid:69)(cid:68)(cid:68)(cid:87)(cid:68)(cid:85)(cid:15) (cid:45)(cid:68)(cid:86)(cid:82)(cid:81) (cid:58)(cid:72)(cid:86)(cid:87)(cid:82)(cid:81)(cid:15) (cid:53)(cid:82)(cid:69) (cid:41)(cid:72)(cid:85)(cid:16) (cid:74)(cid:88)(cid:86)(cid:15) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17) (cid:21)(cid:19)(cid:20)(cid:24)(cid:17) (cid:40)(cid:81)(cid:71)(cid:16)(cid:87)(cid:82)(cid:16)(cid:72)(cid:81)(cid:71) (cid:80)(cid:72)(cid:80)(cid:82)(cid:85)(cid:92) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:86)(cid:17) (cid:44)(cid:81) (cid:36)(cid:71)(cid:89)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) (cid:76)(cid:81) (cid:49)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:44)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86)(cid:76)(cid:81)(cid:74) (cid:54)(cid:92)(cid:86)(cid:87)(cid:72)(cid:80)(cid:86)(cid:17) (cid:15) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:21)(cid:23)(cid:23)(cid:19)(cid:177)(cid:21)(cid:23)(cid:23)(cid:27)(cid:17) (cid:47)(cid:82)(cid:81)(cid:74)(cid:92)(cid:88)(cid:72) (cid:58)(cid:68)(cid:81)(cid:74)(cid:15) (cid:61)(cid:75)(cid:68)(cid:82)(cid:83)(cid:72)(cid:81)(cid:74) (cid:55)(cid:88)(cid:15) (cid:54)(cid:75)(cid:88)(cid:80)(cid:76)(cid:81)(cid:74) (cid:54)(cid:75)(cid:76)(cid:15) (cid:55)(cid:82)(cid:81)(cid:74) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:15) (cid:60)(cid:89)(cid:72)(cid:87)(cid:87)(cid:72) (cid:42)(cid:85)(cid:68)(cid:75)(cid:68)(cid:80)(cid:15) (cid:68)(cid:81)(cid:71) (cid:52)(cid:88)(cid:81) (cid:47)(cid:76)(cid:88)(cid:17) (cid:21)(cid:19)(cid:20)(cid:27)(cid:17) (cid:55)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:76)(cid:81)(cid:74) (cid:83)(cid:85)(cid:82)(cid:16)(cid:71)(cid:85)(cid:82)(cid:83) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72)(cid:86) (cid:90)(cid:76)(cid:87)(cid:75) (cid:85)(cid:72)(cid:70)(cid:82)(cid:81)(cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:86)(cid:17) (cid:68)(cid:85)(cid:59)(cid:76)(cid:89) (cid:83)(cid:85)(cid:72)(cid:83)(cid:85)(cid:76)(cid:81)(cid:87) 
(cid:68)(cid:85)(cid:59)(cid:76)(cid:89)(cid:29)(cid:20)(cid:27)(cid:19)(cid:20)(cid:17)(cid:19)(cid:22)(cid:21)(cid:24)(cid:26) (cid:17) (cid:47)(cid:82)(cid:81)(cid:74)(cid:92)(cid:88)(cid:72) (cid:58)(cid:68)(cid:81)(cid:74)(cid:15) (cid:61)(cid:75)(cid:68)(cid:82)(cid:83)(cid:72)(cid:81)(cid:74) (cid:55)(cid:88)(cid:15) (cid:59)(cid:76)(cid:68)(cid:82)(cid:77)(cid:88)(cid:81) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:15) (cid:43)(cid:68)(cid:81)(cid:74) (cid:47)(cid:76)(cid:15) (cid:36)(cid:81)(cid:71)(cid:92) (cid:58)(cid:68)(cid:92)(cid:15) (cid:68)(cid:81)(cid:71) (cid:52)(cid:88)(cid:81) (cid:47)(cid:76)(cid:88)(cid:17) (cid:21)(cid:19)(cid:20)(cid:25)(cid:68)(cid:17) (cid:36) (cid:81)(cid:82)(cid:89)(cid:72)(cid:79) (cid:68)(cid:83)(cid:83)(cid:85)(cid:82)(cid:68)(cid:70)(cid:75) (cid:87)(cid:82) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:68)(cid:85)(cid:59)(cid:76)(cid:89) (cid:83)(cid:85)(cid:72)(cid:83)(cid:85)(cid:76)(cid:81)(cid:87) (cid:68)(cid:85)(cid:59)(cid:76)(cid:89)(cid:29)(cid:20)(cid:25)(cid:19)(cid:23)(cid:17)(cid:19)(cid:25)(cid:21)(cid:27)(cid:24) (cid:17) (cid:47)(cid:82)(cid:81)(cid:74)(cid:92)(cid:88)(cid:72) (cid:58)(cid:68)(cid:81)(cid:74)(cid:15) (cid:59)(cid:76)(cid:68)(cid:82)(cid:77)(cid:88)(cid:81) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:15) (cid:61)(cid:75)(cid:68)(cid:82)(cid:83)(cid:72)(cid:81)(cid:74) (cid:55)(cid:88)(cid:15) (cid:43)(cid:68)(cid:81)(cid:74) (cid:47)(cid:76)(cid:15) (cid:68)(cid:81)(cid:71) (cid:52)(cid:88)(cid:81) (cid:47)(cid:76)(cid:88)(cid:17) (cid:21)(cid:19)(cid:20)(cid:25)(cid:69)(cid:17) (cid:39)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:16) (cid:81)(cid:82)(cid:88)(cid:81) (cid:74)(cid:72)(cid:81)(cid:72)(cid:85)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:71)(cid:76)(cid:68)(cid:79)(cid:82)(cid:74)(cid:88)(cid:72) (cid:80)(cid:68)(cid:70)(cid:75)(cid:76)(cid:81)(cid:72) (cid:87)(cid:85)(cid:68)(cid:81)(cid:86)(cid:79)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:44)(cid:81) (cid:36)(cid:70)(cid:82)(cid:88)(cid:86)(cid:87)(cid:76)(cid:70)(cid:86)(cid:15) (cid:54)(cid:83)(cid:72)(cid:72)(cid:70)(cid:75) (cid:68)(cid:81)(cid:71) (cid:54)(cid:76)(cid:74)(cid:81)(cid:68)(cid:79) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86)(cid:16) (cid:76)(cid:81)(cid:74) (cid:11)(cid:44)(cid:38)(cid:36)(cid:54)(cid:54)(cid:51)(cid:12)(cid:15) (cid:21)(cid:19)(cid:20)(cid:25) (cid:44)(cid:40)(cid:40)(cid:40) (cid:44)(cid:81)(cid:87)(cid:72)(cid:85)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:38)(cid:82)(cid:81)(cid:73)(cid:72)(cid:85)(cid:16) (cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:81) (cid:15) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:25)(cid:20)(cid:20)(cid:19)(cid:177)(cid:25)(cid:20)(cid:20)(cid:23)(cid:17) (cid:44)(cid:40)(cid:40)(cid:40)(cid:17) (cid:45)(cid:68)(cid:86)(cid:82)(cid:81) (cid:58)(cid:72)(cid:86)(cid:87)(cid:82)(cid:81)(cid:15) (cid:54)(cid:88)(cid:80)(cid:76)(cid:87) (cid:38)(cid:75)(cid:82)(cid:83)(cid:85)(cid:68)(cid:15) (cid:68)(cid:81)(cid:71) (cid:36)(cid:81)(cid:87)(cid:82)(cid:76)(cid:81)(cid:72) (cid:37)(cid:82)(cid:85)(cid:71)(cid:72)(cid:86)(cid:17) (cid:21)(cid:19)(cid:20)(cid:24)(cid:17) (cid:48)(cid:72)(cid:80)(cid:82)(cid:85)(cid:92) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:86)(cid:17) 
(cid:44)(cid:81)(cid:87)(cid:72)(cid:85)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:38)(cid:82)(cid:81)(cid:73)(cid:72)(cid:85)(cid:16) (cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:81) (cid:47)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:53)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:11)(cid:44)(cid:38)(cid:47)(cid:53)(cid:12) (cid:17) (cid:38)(cid:75)(cid:72)(cid:81) (cid:59)(cid:76)(cid:81)(cid:74)(cid:15) (cid:60)(cid:88) (cid:58)(cid:88)(cid:15) (cid:58)(cid:72)(cid:76) (cid:58)(cid:88)(cid:15) (cid:60)(cid:68)(cid:79)(cid:82)(cid:88) (cid:43)(cid:88)(cid:68)(cid:81)(cid:74)(cid:15) (cid:68)(cid:81)(cid:71) (cid:48)(cid:76)(cid:81)(cid:74) (cid:61)(cid:75)(cid:82)(cid:88)(cid:17) (cid:21)(cid:19)(cid:20)(cid:27)(cid:17) (cid:43)(cid:76)(cid:72)(cid:85)(cid:68)(cid:85)(cid:70)(cid:75)(cid:76)(cid:70)(cid:68)(cid:79) (cid:85)(cid:72)(cid:70)(cid:88)(cid:85)(cid:85)(cid:72)(cid:81)(cid:87) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78) (cid:73)(cid:82)(cid:85) (cid:85)(cid:72)(cid:86)(cid:83)(cid:82)(cid:81)(cid:86)(cid:72) (cid:74)(cid:72)(cid:81)(cid:72)(cid:85)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:49)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:38)(cid:82)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:81) (cid:36)(cid:85)(cid:87)(cid:76)(cid:73)(cid:76)(cid:70)(cid:76)(cid:68)(cid:79) (cid:44)(cid:81)(cid:87)(cid:72)(cid:79)(cid:79)(cid:76)(cid:74)(cid:72)(cid:81)(cid:70)(cid:72)(cid:17) (cid:49)(cid:76)(cid:68)(cid:81)(cid:90)(cid:72)(cid:81) (cid:59)(cid:88)(cid:72) (cid:68)(cid:81)(cid:71) (cid:60)(cid:68)(cid:84)(cid:76)(cid:81) (cid:60)(cid:68)(cid:81)(cid:74)(cid:17) (cid:21)(cid:19)(cid:20)(cid:22)(cid:17) (cid:39)(cid:72)(cid:83)(cid:72)(cid:81)(cid:71)(cid:72)(cid:81)(cid:70)(cid:92)(cid:16) (cid:69)(cid:68)(cid:86)(cid:72)(cid:71) (cid:72)(cid:80)(cid:83)(cid:87)(cid:92) (cid:70)(cid:68)(cid:87)(cid:72)(cid:74)(cid:82)(cid:85)(cid:92) (cid:71)(cid:72)(cid:87)(cid:72)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:89)(cid:76)(cid:68) (cid:83)(cid:75)(cid:85)(cid:68)(cid:86)(cid:72) (cid:86)(cid:87)(cid:85)(cid:88)(cid:70)(cid:16) (cid:87)(cid:88)(cid:85)(cid:72) (cid:87)(cid:85)(cid:72)(cid:72)(cid:86)(cid:17) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:20)(cid:19)(cid:24)(cid:20)(cid:177)(cid:20)(cid:19)(cid:25)(cid:19)(cid:17) (cid:60)(cid:68)(cid:84)(cid:76)(cid:81) (cid:60)(cid:68)(cid:81)(cid:74)(cid:15) (cid:60)(cid:68)(cid:79)(cid:76)(cid:81) (cid:47)(cid:76)(cid:88)(cid:15) (cid:68)(cid:81)(cid:71) (cid:49)(cid:76)(cid:68)(cid:81)(cid:90)(cid:72)(cid:81) (cid:59)(cid:88)(cid:72)(cid:17) (cid:21)(cid:19)(cid:20)(cid:24)(cid:17) (cid:53)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86) (cid:73)(cid:85)(cid:82)(cid:80) (cid:70)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:87)(cid:72)(cid:91)(cid:87) (cid:80)(cid:72)(cid:86)(cid:86)(cid:68)(cid:74)(cid:72)(cid:86)(cid:17) (cid:44)(cid:81) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:72)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:24)(cid:22)(cid:85)(cid:71) (cid:36)(cid:81)(cid:81)(cid:88)(cid:68)(cid:79) (cid:48)(cid:72)(cid:72)(cid:87)(cid:76)(cid:81)(cid:74) 
(cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:36)(cid:86)(cid:86)(cid:82)(cid:70)(cid:76)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:38)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:47)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70)(cid:86) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72) (cid:26)(cid:87)(cid:75) (cid:44)(cid:81)(cid:87)(cid:72)(cid:85)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:45)(cid:82)(cid:76)(cid:81)(cid:87) (cid:38)(cid:82)(cid:81)(cid:16) (cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:81) (cid:49)(cid:68)(cid:87)(cid:88)(cid:85)(cid:68)(cid:79) (cid:47)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86)(cid:76)(cid:81)(cid:74)(cid:17) (cid:15) (cid:89)(cid:82)(cid:79)(cid:16) (cid:88)(cid:80)(cid:72) (cid:21)(cid:15) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:22)(cid:19)(cid:28)(cid:177)(cid:22)(cid:20)(cid:22)(cid:17) (cid:61)(cid:76)(cid:70)(cid:75)(cid:68)(cid:82) (cid:60)(cid:68)(cid:81)(cid:74)(cid:15) (cid:39)(cid:76)(cid:92)(cid:76) (cid:60)(cid:68)(cid:81)(cid:74)(cid:15) (cid:38)(cid:75)(cid:85)(cid:76)(cid:86) (cid:39)(cid:92)(cid:72)(cid:85)(cid:15) (cid:59)(cid:76)(cid:68)(cid:82)(cid:71)(cid:82)(cid:81)(cid:74) (cid:43)(cid:72)(cid:15) (cid:36)(cid:79)(cid:72)(cid:91) (cid:54)(cid:80)(cid:82)(cid:79)(cid:68)(cid:15) (cid:68)(cid:81)(cid:71) (cid:40)(cid:71)(cid:88)(cid:68)(cid:85)(cid:71) (cid:43)(cid:82)(cid:89)(cid:92)(cid:17) (cid:21)(cid:19)(cid:20)(cid:25)(cid:17) (cid:43)(cid:76)(cid:72)(cid:85)(cid:16) (cid:68)(cid:85)(cid:70)(cid:75)(cid:76)(cid:70)(cid:68)(cid:79) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:86) (cid:73)(cid:82)(cid:85) (cid:71)(cid:82)(cid:70)(cid:88)(cid:80)(cid:72)(cid:81)(cid:87) (cid:70)(cid:79)(cid:68)(cid:86)(cid:16) (cid:86)(cid:76)(cid:73)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:44)(cid:81) (cid:38)(cid:82)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:49)(cid:82)(cid:85)(cid:87)(cid:75) (cid:36)(cid:80)(cid:72)(cid:85)(cid:16) (cid:76)(cid:70)(cid:68)(cid:81) (cid:38)(cid:75)(cid:68)(cid:83)(cid:87)(cid:72)(cid:85) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:36)(cid:86)(cid:86)(cid:82)(cid:70)(cid:76)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:38)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:47)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70)(cid:86)(cid:29) (cid:43)(cid:88)(cid:80)(cid:68)(cid:81) (cid:47)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:55)(cid:72)(cid:70)(cid:75)(cid:81)(cid:82)(cid:79)(cid:82)(cid:16) (cid:74)(cid:76)(cid:72)(cid:86)(cid:17) (cid:15) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:20)(cid:23)(cid:27)(cid:19)(cid:177)(cid:20)(cid:23)(cid:27)(cid:28)(cid:17) (cid:52)(cid:76)(cid:81)(cid:74)(cid:92)(cid:88) (cid:60)(cid:76)(cid:81)(cid:15) (cid:60)(cid:88) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:15) (cid:58)(cid:72)(cid:76)(cid:81)(cid:68)(cid:81) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:15) (cid:68)(cid:81)(cid:71) (cid:55)(cid:76)(cid:81)(cid:74) (cid:47)(cid:76)(cid:88)(cid:17) (cid:21)(cid:19)(cid:20)(cid:26)(cid:17) 
(cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:90)(cid:76)(cid:87)(cid:75) (cid:71)(cid:72)(cid:72)(cid:83) (cid:80)(cid:72)(cid:80)(cid:82)(cid:85)(cid:92) (cid:81)(cid:72)(cid:87)(cid:90)(cid:82)(cid:85)(cid:78)(cid:17) (cid:44)(cid:81) (cid:38)(cid:82)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:81) (cid:40)(cid:80)(cid:16) (cid:83)(cid:76)(cid:85)(cid:76)(cid:70)(cid:68)(cid:79) (cid:48)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71)(cid:86) (cid:76)(cid:81) (cid:49)(cid:68)(cid:87)(cid:88)(cid:85)(cid:68)(cid:79) (cid:47)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86)(cid:16) (cid:76)(cid:81)(cid:74)(cid:17) (cid:15) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:20)(cid:22)(cid:19)(cid:28)(cid:177)(cid:20)(cid:22)(cid:20)(cid:27)(cid:17) (cid:52)(cid:76)(cid:81)(cid:74)(cid:92)(cid:88) (cid:60)(cid:76)(cid:81)(cid:15) (cid:60)(cid:88) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:15) (cid:58)(cid:72)(cid:76)(cid:81)(cid:68)(cid:81) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:15) (cid:55)(cid:76)(cid:81)(cid:74) (cid:47)(cid:76)(cid:88)(cid:15) (cid:68)(cid:81)(cid:71) (cid:58)(cid:76)(cid:79)(cid:79)(cid:76)(cid:68)(cid:80) (cid:60)(cid:68)(cid:81)(cid:74) (cid:58)(cid:68)(cid:81)(cid:74)(cid:17) (cid:21)(cid:19)(cid:20)(cid:27)(cid:17) (cid:39)(cid:72)(cid:72)(cid:83) (cid:85)(cid:72)(cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:70)(cid:72)(cid:16) (cid:80)(cid:72)(cid:81)(cid:87) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:73)(cid:82)(cid:85) (cid:70)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:80)(cid:72)(cid:72)(cid:87)(cid:76)(cid:81)(cid:74) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:68)(cid:86)(cid:86)(cid:82)(cid:70)(cid:76)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:73)(cid:82)(cid:85) (cid:70)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70)(cid:86) (cid:15) (cid:83)(cid:68)(cid:74)(cid:72)(cid:86) (cid:24)(cid:25)(cid:28)(cid:177)(cid:24)(cid:26)(cid:27)(cid:17) (cid:58)(cid:72)(cid:76)(cid:81)(cid:68)(cid:81) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:15) (cid:55)(cid:76)(cid:81)(cid:74) (cid:47)(cid:76)(cid:88)(cid:15) (cid:52)(cid:76)(cid:81)(cid:74)(cid:92)(cid:88) (cid:60)(cid:76)(cid:81)(cid:15) (cid:68)(cid:81)(cid:71) (cid:60)(cid:88) (cid:61)(cid:75)(cid:68)(cid:81)(cid:74)(cid:17) (cid:21)(cid:19)(cid:20)(cid:25)(cid:17) (cid:49)(cid:72)(cid:88)(cid:85)(cid:68)(cid:79) (cid:85)(cid:72)(cid:70)(cid:82)(cid:89)(cid:72)(cid:85)(cid:92) (cid:80)(cid:68)(cid:70)(cid:75)(cid:76)(cid:81)(cid:72) (cid:73)(cid:82)(cid:85) (cid:70)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:72)(cid:71) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:17) (cid:68)(cid:85)(cid:59)(cid:76)(cid:89)(cid:29) (cid:38)(cid:82)(cid:80)(cid:83)(cid:88)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) (cid:47)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:17) 
(cid:54)(cid:75)(cid:68)(cid:81)(cid:75)(cid:72)(cid:81)(cid:74) (cid:61)(cid:75)(cid:68)(cid:82) (cid:68)(cid:81)(cid:71) (cid:43)(cid:90)(cid:72)(cid:72) (cid:55)(cid:82)(cid:88) (cid:49)(cid:74)(cid:17) (cid:21)(cid:19)(cid:19)(cid:26)(cid:17) (cid:44)(cid:71)(cid:72)(cid:81)(cid:87)(cid:76)(cid:73)(cid:76)(cid:16) (cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) (cid:85)(cid:72)(cid:86)(cid:82)(cid:79)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:70)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:93)(cid:72)(cid:85)(cid:82) (cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:86)(cid:29) (cid:36) (cid:80)(cid:68)(cid:70)(cid:75)(cid:76)(cid:81)(cid:72) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:68)(cid:83)(cid:83)(cid:85)(cid:82)(cid:68)(cid:70)(cid:75)(cid:17) (cid:44)(cid:81) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:72)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:73) (cid:87)(cid:75)(cid:72) (cid:21)(cid:19)(cid:19)(cid:26) (cid:45)(cid:82)(cid:76)(cid:81)(cid:87) (cid:38)(cid:82)(cid:81)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:81) (cid:40)(cid:80)(cid:83)(cid:76)(cid:85)(cid:76)(cid:70)(cid:68)(cid:79) (cid:48)(cid:72)(cid:87)(cid:75)(cid:16) (cid:82)(cid:71)(cid:86) (cid:76)(cid:81) (cid:49)(cid:68)(cid:87)(cid:88)(cid:85)(cid:68)(cid:79) (cid:47)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:51)(cid:85)(cid:82)(cid:70)(cid:72)(cid:86)(cid:86)(cid:76)(cid:81)(cid:74) (cid:68)(cid:81)(cid:71) (cid:38)(cid:82)(cid:80)(cid:16) (cid:83)(cid:88)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:49)(cid:68)(cid:87)(cid:88)(cid:85)(cid:68)(cid:79) (cid:47)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:47)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74)(cid:17)"
] | [
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Conditional Text Generation has drawn much attention as a topic of Natural Language Generation (NLG) which provides the possibility for humans to control the properties of generated contents.",
"Current conditional generation models cannot handle emerging conditions due to their joint end-to-end learning fashion.",
"When a new condition added, these techniques require full retraining.",
"In this paper, we present a new framework named P re-train and P lug-in V ariational A utoE ncoder (PPVAE) towards flexible conditional text generation.",
"PPVAE decouples the text generation module from the condition representation module to allow one-to-many conditional generation.",
"When a fresh condition emerges, only a lightweight network needs to be trained and works as a plug-in for PPVAE, which is efficient and desirable for real-world applications.",
"Extensive experiments demonstrate the superiority of PPVAE against the existing alternatives with better conditionality and diversity but less training effort.",
"1 1 Introduction Currently, neural generation techniques have powered many inspiring applications, e.g., poem generation (Yang et al., 2018), neural machine translation (NMT) (Bahdanau et al., 2015) and chatbot (Zhao et al., 2017).",
"Conditional (also known as controllable) text generation is an important task of text generation, aiming to generate realistic text that carries a specific attribute (e.g., positive or negative sentiment).",
"A common solution is to encode the condition into a vector representation and then integrate it with the text generation process (Kingma The first three authors contribute equally to this paper. Work done when Jialong Han was with Tencent AI Lab. Chenliang Li is the corresponding author. 1 The code is available at https://github.com/ WHUIR/PPVAE . et al., 2014; Hu et al., 2017; Mirza and Osindero, 2014).",
"These existing neural models have achieved encouraging results.",
"However, when a new condition is added (e.g., a new topic for categorical generation), they require a full retraining or fine-tuning.",
"This process is both time-consuming and computationally inefficient (Houlsby et al., 2019).",
"Both fine-tuning and retraining are not desirable in real-world applications since the delivery (e.g., transmitting updated weights through the Internet) and client-side re-deployment (e.g., distribute updated weights to users) of large-scale weights are often difficult.",
"Inspired by the recent success of Variational Auto-Encoder (VAE) (Kingma and Welling, 2014) based post-hoc conditional image generation strategy (Engel et al., 2018), we provide a new perspective for flexible conditional text generation.",
"We propose P re-train and P lug-in V ariational A uto-E ncoder (PPVAE), which decouples the text generation module from the condition representation module.",
"PPVAE is a hierarchical framework composed of two VAEs: (1) PRETRAINVAE , which derives the global latent space of text with its encoder (pre-trained global encoder) and learns to generate text based on an easily-accessible large unlabeled dataset with its decoder (pre-trained global decoder); (2) PLUGINVAE , which is a lightweight neural network that learns to transform vectors from the conditional latent space to the global latent space, and vice versa.",
"This mapping function can be easily learned with only a few conditional training samples.",
"In this sense, once we transform a latent variable (also known as latent code) randomly sampled from the conditional space distribution to the global space, the pre-trained global decoder is directly adopted for generation.",
"In other words, whenever a new condition emerges, we only need to train a PLUGINVAE and directly plug it into the framework.",
"Different from the existing end-to-end neural models (Mirza and Osindero, 2014; Sohn et al., 2015; Kingma et al., 2014), PPVAE focuses on the learning of pure transformation between the continuous latent spaces, instead of the tricky discrete text generation.",
"Once trained, PRETRAINVAE is fixed for text representation and generation under all conditions.",
"Our proposed framework decouples the conditional space learning from the text generation, endowing PPVAE with more flexibility when handling emerging conditions.",
"Also, training only a small conditional network for latent space transformation is much more efficient than co-training with the text generation.",
"Additionally, we can easily increase the capability of generation using a larger corpus or deeper neural networks for text encoding and decoding.",
"Our main contributions can be summarized as follows: (1) We propose a novel framework, PPVAE, for conditional text generation, which allows a separate training for a new condition without retraining the whole network.",
"(2) We conduct extensive experiments and analysis to verify the effectiveness of our proposed PPVAE.",
"Our framework achieves state-of-the-art performance on conditionality in both automatic and human evaluations.",
"Boosted by the recent success of deep learning technology, Natural Language Generation (NLG) has recently become popular in the NLP community.",
"Many great works have attempted to solve various subtasks like dialogue generation (Li et al., 2016), poetry generation (Yi et al., 2018) and story generation (Fan et al., 2018) and new techniques keep emerging (Bowman et al., 2016; Yu et al., 2017; Zhou et al., 2020).",
"However, due to the black-box nature of neural networks, the recent proposed generic models suffer the problem of lacking interpretability and controllability.",
"To handle this problem and support generating plausible text with a specified condition, conditional text generation (Kikuchi et al., 2016; Ficler and Goldberg, 2017; Hu et al., 2017) has recently attracted extensive attention.",
"Current research in this direction mainly falls into two fashions: the supervised methods and semi-supervised methods.",
"For supervised methods, Mirza and Osindero (2014); Sohn et al. (2015) first converted the condition information to one-hot vectors, then integrated them into a generator and a discriminator.",
"To enhance the correlation between structured conditional code and generated samples, Chen et al. (2016) adopted an extra adversarial classifier to infer the structured code from generated samples.",
"Wang and Wan (2018) used multiple generators for multiple conditions and a multi-class classifier to provide training signals for the learning of generators.",
"However, given only a limited number of conditional samples, semi-supervised methods are compulsory.",
"To utilize the implicit conditional distribution behind the unlabeled text, Kingma et al. (2014) introduced a classifier into the VAE architecture.",
"Hu et al. (2017) further involved two additional independent regularization terms in enhancing the disentanglement between structured code and unstructured code.",
"Very recently, Keskar et al. (2019) used human-defined control code to pre-trained Language Model in an unsupervised manner.",
"Our work falls in the category of semi-supervised learning yet differs from the existing works in the following ways: (1) Our model decouples the text generation module from the condition representation module which two are tightly fused as a single one in previous studies, enabling possible exploitation for pre-trained Language Models (e.g., GPT-2 (Radford et al., 2019)).",
"(2) Our model allows single-condition generation, which could inspire new applications like polite speech generator (Niu and Bansal, 2018) and data augmentation (Guo et al., 2018).",
"(3) Our model can handle emerging conditions while achieving state-of-the-art performance with fewer parameters and less training time.",
"Variational Auto-Encoder (VAE).",
"VAE (Kingma and Ba, 2015) is widely used in continuous generation (e.g., image generation).",
"Bowman et al. (2016) introduced VAE to NLG to solve the one-to-many generation problem (i.e., generating multiple feasible samples for the same input).",
"Given a latent variable z randomly sampled from a prior distribution, VAE comprises an encoder enc ( x ) = q ( z | x ) and a decoder dec ( z ) = p ( x | z ) .",
"The encoder aims to encode input data x into latent space Z R d .",
"The decoder is used to reconstruct the original input x , given the corresponding z .",
"Thus, the loss function of VAE is formulated as: LVAE ( x ) = E q ( z | x ) [log p ( x | z )] + KL( q ( z | x ) (cid:107) p ( z )) (1) where KL( || ) is the Kullback-Leibler (KL) divergence, p ( z ) = N (0 , 1) is the prior distribution.",
"The first term ensures that VAE can distill compact variable z in latent space for reconstruction.",
"The second term pushes posterior distribution to be close to the prior distribution, securing the mutual information between original data and the latent space (Dupont, 2018).",
"Conditional Text Generation with VAE.",
"Conditional text generation has drawn much attention recently.",
"By controlling the properties of generated contents, we can apply the generative models to many real-world scenarios.",
"We follow the problem setting in (Hu et al., 2017).",
"Given a set of k conditions C = { c 1 , c 2 , ..., c k } , an unlabeled corpus X , and conditional text samples Y = Y 1 Y 2 ... Y k where each Y i is a set of text samples that carries the condition c i .",
"The goal of a VAE model is to learn a decoder p ( y | z, c i ) that takes the latent variable z and the condition c i to calculate the distribution over the text samples Y i .",
"Thus, when the condition c i and a randomly sampled latent variable z p ( z ) specified, the model could generate realistic text samples matching the given condition.",
"As a basis for semi-supervised learning, a large unlabeled corpus should include diverse text which covers a vast spectrum of conditions.",
"Thus, text under each condition forms a conditional latent space, which could be mapped from a larger global latent space.",
"Based on this, we propose a PRETRAINVAE and a PLUGINVAE to derive the global and conditional latent space, respectively.",
"PRETRAINVAE.",
"The encoder and decoder of PRETRAINVAE are used to encode and generate text, respectively.",
"As discussed above, PRETRAINVAE is trained on a large amount of unlabeled text to derive the global latent space Z g for the latent variable z g , where Z g R d g and d g is the space dimension.",
"Previous studies usually use a common VAE for text representation and generation.",
"However, as pointed out in (Bowman et al., 2016), VAE suffers the notorious posterior collapse problem.",
"To address this, we utilize Wasserstein Autoen-coder (WAE) (Tolstikhin et al., 2018) for PRETRAINVAE.",
"Different from the original VAE, WAE encourages aggregated posterior distribution to be close to the prior, which is effective in alleviating the reconstruction problem of VAE (Tolstikhin et al., 2018).",
"Specifically, we adopt WAE-GAN, a variant of WAE, which incorporates the merits of adversarial learning.",
"During training, the encoder enc g ( x ) = q g ( z g | x ) encodes the text to the latent space and the decoder dec g ( z g ) = p g ( x | z g ) reconstruct the text with the latent variable z g .",
"Thus, the loss function of PRETRAINVAE is formulated as: LPRETRAINVAE ( x ) = E q g ( z g | x ) [log p g ( x | z g )] + D ( Q ( z g ) , p ( z g )) (2) where Q ( z g ) = (cid:82) q g ( z g | x ) p ( x ) dx is the aggregated posterior distribution; p ( z g ) is the prior normal distribution; D is the adversarial discriminator; is the coefficient hyper-parameter ( > 0 ).",
"PLUGINVAE.",
"For each condition, we use a condition-specific PLUGINVAE to derive the conditional space.",
"That is, PLUGINVAE is proposed to learn the transformation between the conditional and global latent space for each condition.",
"Specifically, for each condition c i , we use a limited number of conditional samples y i and utilize the global encoder enc g to encode them into v y i .",
"Note that normally, the encoded text samples under a single condition are not likely to densely clustered in the global text space Z g , since the learning process of Z g is condition-independent and the unlabeled corpus contains diverse text samples.",
"PLUGINVAE for condition c i consists of an encoder enc c i ( v y i ) = q c i ( z c i | v y i ) and a decoder dec c i ( z c i ) = p c i ( v y i | z c i ) .",
"The learned condition-dependent latent space is Z c i R d c , where d c is the space dimension.",
"Thus, PLUGINVAE is capable of mapping the samples in the global latent space to and from a denser conditional latent space (i.e., d c < d g ).",
"During training, the loss function of PLUGINVAE for a single condition is defined as: L single ( v y i ) = E q ( z ci | v yi ) [log p c i ( v y i | z c i )] + | (KL( q c i ( z c i | v y i ) (cid:107) p ( z c i )) | (3) where p ( z c i ) is the prior normal distribution of the conditional latent space; z c i is the latent variable; v y i = enc g ( y i ) is encoded text samples from Y i .",
"To enhance the diversity of generated text, we introduce an extra constant term to control the amount Reconstruction",
"of encoded information in VAE (Dupont, 2018; Chen et al., 2018; Kim and Mnih, 2018).",
"By setting to an appropriate value, PLUGINVAE could extract compact conditional information without sacrificing the fluency or accuracy.",
"Although we can already generate conditional text under a single condition by Equation 3, it is possible to even further improve the conditionality by introducing negative samples.",
"We construct the negative samples y (cid:48) i from Y (cid:48) i and encode them: Y (cid:48) i = Y Y i v (cid:48) y i = enc g ( y (cid:48) i ) (4) Thus, the loss function of PLUGINVAE with negative samples is defined as: LPLUGINVAE ( v y i , v (cid:48) y i ) = L single ( v y i ) L single ( v (cid:48) y i ) (5) where v y i is a batch of encoded samples under condition c i , and v (cid:48) y i is a batch of encoded negative samples; is a hyper-parameter balancing the positive and negative samples.",
"For different tasks, the best setting for may vary.",
"Intuitively, the larger the difference between the conditions is, the smaller should be.",
"In this section, we provide the details of training and generation procedures.",
"As illustrated in Figure 1, the workflow is composed of three steps.",
"Pre-train once, infer everywhere.",
"First, as shown in Figure",
"1(a), using the unlabeled corpus X , we pre-train PRETRAINVAE to learn the global latent space Z g by reconstruction with Equation 2.",
"Once pre-trained, the weights of both enc g and dec g are fixed.",
"As an unsupervised VAE model, PRETRAINVAE is capable of generating diverse but unconditional text.",
"Train it when you need it.",
"Previous methods (Kingma et al., 2014; Hu et al., 2017) learn the joint conditional space by jointly considering all conditions.",
"However, once the model is trained, it is not possible to add a new condition without a full retraining.",
"Different from those approaches, PPVAE is totally flexible that allows adding new conditions.",
"Shown in Figure",
"1(b), once a condition is added, we only need to train a PLUGINVAE specifically for this condition with Equation 3 (or Equation 5, if provided with samples of other conditions).",
"Since PLUGINVAE is text-irrelevant and only learns to map between two latent spaces, the training number of parameters is only 0 .",
"34% (see Section 6.3) of fine-tuning PRETRAINVAE or retraining other models.",
"Additionally, although we need to train k PLUGINVAE for k conditions, the total number of trained parameters is still much smaller than existing methods (unless k > 1 / 0 . 34% 294 , which is impossible in actual applications).",
"Plus, we can parallel the conditional training to speed up the process easily.",
"Plug it in and generate.",
"Shown in Figure",
"1(c), once PLUGINVAE for the condition c i is trained, we can plug it into the PPVAE framework and generate text together with PRETRAINVAE.",
"First, we randomly sample a latent variable z c i from the prior distribution p ( z c i ) = N (0 , 1) .",
"Then we use PLUGIN VAE's decoder dec c i to map z c i to the global latent space Z g and obtain z (cid:48) c i : z (cid:48) c i = dec c i ( z c i ) .",
"(6) Since z (cid:48) c i Z g , we can directly use the global decoder dec g to generate text: y i = dec g ( z (cid:48) c i ) (7) where y i is the generated text under condition c i .",
"Following the setting of (Hu et al., 2017), we mainly focus on short text generation (no longer",
"than 15 tokens), which is easier for both automatic and human evaluations.",
"We use Yelp (Shen et al., 2017) and News Titles (Fu et al., 2018) for experiments.",
"Yelp is a collection of restaurant reviews.",
"We use the pre-processed version used in (Shen et al., 2017), where two polarity sentiment labels are provided.",
"For News Titles, we choose the titles belong to Business , Entertainment and Health categories for our experiments.",
"Both Yelp and News Titles are datasets with relatively short text.",
"We filter out text longer than 15 words, then choose the top 8,900 and 10,000 words as the vocabulary for Yelp and News Titles, respectively.",
"The statistics of the two datasets are listed in Table 1.",
"We discard the labels in the original training and validation splits.",
"We use the original training split as the unlabeled corpus; the validation split to select the best unsupervised models, and the test split as the labeled conditional text.",
"Based on the Yelp dataset, we define two tasks: (1) Sentiment.",
"This task aims at generating text samples, either positive or negative.",
"The ratio of positive/negative text in Yelp is roughly 0 .",
"6 : 0 .",
"4 .",
"We randomly sample 200 positive and 200 negative text for supervised training.",
"(2) Length.",
"This task aims at generating text samples with a specific length.",
"We define ( len 3 ) as short text, ( len 12 ) as long text and ( 3 < len < 12 ) as medium text.",
"We respectively sample 200 text for short, medium, and long text for supervised training.",
"Based on the News Titles dataset, we define the categorical text generation task called Topic.",
"This task aims at generating text samples on a certain topic.",
"The ratio of business/health/entertainment in News Title is 0 .",
"38 : 0 .",
"15 : 0 .",
"47 , which is more imbalanced than Yelp.",
"We randomly sample 200 text for each category for supervised learning.",
"We use two semi-supervised methods, S-VAE (Kingma et al., 2014) and CTRL-GEN (Hu et al., 2017) as our baselines.",
"S-VAE incorporates a classifier to provide conditional distribution for unlabeled data.",
"Note that S-VAE is originally proposed for image generation but adapted to text generation as a baseline by Hu et al. (2017).",
"CTRL-GEN further exploits several regularization terms to enhance the disentanglement between the structured code and the unstructured code.",
"For a fair comparison, both the text encoder and decoder of the two baselines are the same as PRETRAINVAE.",
"Furthermore, the baseline methods also exploit the same unlabeled corpus X and labeled corpus Y as described in the original papers.",
"PPVAE is a model-agnostic approach, which means that both the encoders and encoders of PRETRAINVAE and PLUGINVAE can be modified to work under different settings.",
"Here, we describe the model architecture used in our experiments.",
"PRETRAINVAE.",
"For the encoder, we use a one-layer Bidirectional Gated Recurrent Unit (Bi-GRU) with 256 hidden units in each direction as its encoder.",
"Two linear Fully-Connected (FC) layers are used for re-parameteristic trick (Kingma and Welling, 2014).",
"For the decoder, we use a Transformer (Vaswani et al., 2017) ( 3 layers, 8 heads).",
"Additionally, we add extra positional embedding after each block, and the linearly transformed encoded vector is provided as input for each block (Brock et al., 2019).",
"For a fair comparison, we use the same encoder-decoder architecture for both S-VAE and CTRL-GEN.",
"PLUGINVAE.",
"The encoder is a two-layer FC network of 64/32 hidden units taking input in d g dimensions with an additional linear output layer of d c units.",
"The decoder is a two-layer FC network of 32/64 hidden units taking the latent variable in d c dimensions as input with a linear output layer of d g units.",
"The activation function used in the FC networks is LeakyRelu (Maas et al., 2013).",
"PRETRAINVAE.",
"The size of latent space d g is set to 128 .",
"The word embedding is in 256 dimensions and randomly initialized.",
"The output softmax matrix is tied with the embedding layer.",
"For the adversarial classifier, we adopt two 128D hidden FC layers with LeakyRelu activation and one 1D output linear layer without bias.",
"The balance coefficient is 20 for Yelp and 15 for News Titles.",
"We train the WAE-GAN with Wasserstein Divergence (Wu et al., 2018) to smooth the training process.",
"The coefficient k and power p of Wasserstein Divergence Task Conditions Method Accuracy Log-Variance Distinct-1 Distinct-2 ( better) ( better) ( better) ( better) Sentiment { Positive, Negative } S-VAE 0.7194 -5.38 0.0198 0.2520 CTRL-GEN 0.6998 -2.78 0.0026 0.0164 PPVAE-single (ours) 0.7832 -11.12 0.0350 0.2568 PPVAE (ours) 0.8484 -11.90 0.0356 0.2627 Length { Short, Medium, Long } S-VAE 0.8598 -4.82 0.0187 0.1795 CTRL-GEN 0.3957 -1.96 0.0021 0.0146 PPVAE-single (ours) 0.9640 -6.96 0.0375 0.2549 PPVAE (ours) 0.9722 -7.64 0.0372 0.2538 Topic { Business, Health, Entmt.",
"are set to 2 and 6, respectively.",
"During pre-training, the batch size is set to 512.",
"Adam (Kingma and Ba, 2015) with beta 1 = 0 is used as the optimizer.",
"The learning rate is set to 5 10 4 .",
"PLUGINVAE.",
"We set the size of latent space d c = 20 .",
"is set to 0.1 for sentiment tasks, 0.05 for categorical tasks, and 3 10 3 for length tasks.",
"The batch size is set to 128.",
"Adam (Kingma and Ba, 2015) with beta 1 = 0 .",
"5 is used as the optimizer, learning rate is 3 10 4 for 20K iterations.",
"linearly increases from 0 to 5 in first 10K iterations.",
"Metrics.",
"We evaluate the results with two metrics, accuracy and diversity.",
"For accuracy , we train a sentiment classifier and categorical classifier (Kim, 2014), which could achieve accuracy of 90% and 97% on the validation set, respectively.",
"The accuracy of length task can be directly calculated with the word count of generated text.",
"Plus, a model that performs well on only one condition but poorly on others is not practically useful.",
"Thus, to measure the robustness among conditions, we calculate the variance of accuracy under all conditions in a task.",
"For diversity , we adopt Distinct-1 and Distinct-2 (Li et al., 2016) metrics.",
"Distinct-1/Distinct-2 are the ratios of unique 1-gram/2-gram, respectively.",
"A higher value indicates better diversity.",
"For all tasks and models, we randomly generate 10K text for each condition by greedy decoding and report the averaged results.",
"Single Condition Generation.",
"In a real-world scenario, the full set of conditions is not always available.",
"When provided only a labeled set of target text (i.e., k = 1 ), it is not possible to learn the joint conditional space for S-VAE and CTRL-GEN any more.",
"However, PPVAE can deal with that by training without negative samples using Equation 3.",
"Accuracy.",
"The results of conditional text generation are listed in Table 2.",
"On sentiment task, our model outperforms CTRL-GEN and S-VAE by 0.1486 and 0.129, respectively.",
"On length task, the accuracy of our model exceeds 95% , dramatically outperforming S-VAE and CTRL-GEN by 0.1124 and 0.5765 on accuracy.",
"Notably, the performance of CTRL-GEN (0.3957) is extremely low, demonstrating the limitation of its generator-discriminator (Goodfellow et al., 2014) training process and its token-based discriminator, which is unable to discriminate text with different lengths.",
"On topic task, our model scores higher on accuracy than S-VAE and CTRL-GEN by 0.1094 and 0.2689, respectively.",
"On all three tasks, PPVAE-single performs slightly poorer than PPVAE with negative samples, verifying the effectiveness of negative sampling.",
"Furthermore, our models achieve the lowest variance on all three tasks, indicating that PPVAE is robust and achieves a good balance among conditions.",
"Diversity.",
"Diversity is a long-lasting issue lying in the field of generative models.",
"Recent works (Wang et al., 2017; Razavi et al., 2019) reveal the capability of the diverse content generation with Task Method Fluency Conditionality Sentiment S-VAE 3.10 3.04 CTRL-GEN 3.65 3.23 PPVAE-single 3.54 3.23 PPVAE 3.30 3.29 Length S-VAE 3.64 0.8598 CTRL-GEN 2.53 0.3597 PPVAE-single 3.43 0.9640 PPVAE 3.50 0.9722 Topic S-VAE 3.31 2.78 CTRL-GEN 3.09 2.51 PPVAE-single 3.38 3.33 PPVAE 3.45 3.57 Table 3: Human evaluation results.",
"VAE-based methods.",
"These works also conclude that VAE-based methods have better output diversity than GAN-based models.",
"Our experimental results support this conclusion well.",
"Particularly, CTRL-GEN suffers poor diversity, which indicates the generation of dull text (Li et al., 2016).",
"Both S-VAE and PPVAE show prominently better diversity than GAN-based model, CTRL-GEN.",
"Note that the relation between the usage of negative examples and text diversity of PPVAE is not statistically prominent ( p > 0 . 05 ).",
"We conduct human annotations as a complementary evaluation beyond automatic metrics.",
"Specifically, eight individual judges are asked to rate over 200 conditional samples generated from each model and each condition.",
"That is, for each model, a total of 4 , 800 text samples are annotated.",
"A judge needs to rate fluency and conditionality in the standard 1 to 5 scale.",
"Fluency measures whether the text samples are natural and fluent as real (i.e., human-written) ones.",
"Conditionality indicates whether the generated text adheres to the given condition.",
"Shown in Table 3, PPVAE achieves the best conditionality in both automatic and human evaluations on all three tasks.",
"Meanwhile, PPVAE retains a satisfying fluency on sentiment and length tasks and obtains the best fluency on the topic task.",
"To measure the efficiency of proposed methods, we report the training time and the number of parameters of S-VAE, CTRL-GEN and PPVAE in Table 4.",
"We train the models on a single Nvidia GTX 1080 GPU and report the training time until the convergence of each model.",

Table 4: Average numbers of parameters and time costs for training.

Method | # Training Params | Training Time
---|---|---
S-VAE | 6.5M | 1.4h
CTRL-GEN | 8.5M | 3.5h
PRETRAINVAE | 6.5M | 1.2h (only once)
PLUGINVAE | 22K | 64s
"PRETRAINVAE has the same size as S-VAE but only needs to be trained once and does not require a full retraining when a new condition is added.",
"Also, PLUGINVAE, which learns to transform between the global latent space and the conditional latent space, only has 22K parameters and can be trained within about one minute.",
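As a rough illustration of how small such a plug-in module can be, here is a minimal sketch of a VAE that maps between a frozen global latent space and a low-dimensional conditional latent space. The layer widths and latent dimensions are illustrative assumptions, not the paper's configuration (which totals about 22K parameters).

```python
# Hedged sketch of a tiny plug-in VAE over a frozen global latent space.
import torch
import torch.nn as nn

class PluginVAE(nn.Module):
    def __init__(self, global_dim: int = 32, cond_dim: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(global_dim, 16), nn.ReLU())
        self.mu = nn.Linear(16, cond_dim)
        self.logvar = nn.Linear(16, cond_dim)
        self.dec = nn.Sequential(nn.Linear(cond_dim, 16), nn.ReLU(),
                                 nn.Linear(16, global_dim))

    def forward(self, z_global: torch.Tensor):
        h = self.enc(z_global)
        mu, logvar = self.mu(h), self.logvar(h)
        # reparameterization trick on the conditional latent
        z_cond = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z_cond), mu, logvar

model = PluginVAE()
# a few thousand parameters at these illustrative sizes; the paper's
# PluginVAE is around 22K with its own widths
print(sum(p.numel() for p in model.parameters()))
```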
"As a natural baseline, the conditional generation can also be done by directly fine-tuning PRETRAINVAE on each condition.",
"As shown in Table 5, both PLUGINVAE trained with negative samples and PLUGINVAE trained without them significantly outperform a directly fine-tuned PRETRAINVAE on accuracy; moreover, direct fine-tuning is not computationally efficient, and saving the full weights is undesirable for industrial applications when the model is large (e.g., GPT-2 (Radford et al., 2019)).",
"PPVAE has an important balancing hyper-parameter, for which we test the values {0, 2, 5, 10} on the long text generation task.",
"From the results in Table 6, we find that this hyper-parameter controls the balance between diversity and accuracy.",
"Specifically, when it is too large, more diverse samples can be generated, but the accuracy may be sacrificed slightly.",
"On the contrary, when it is too small, the accuracy can climb to a higher value, but meanwhile, the diversity drops drastically.",
"Empirically, we find that a value of 5 is appropriate for all tasks.",
"We present selected conditional text generated for each condition in Table 7.",
"As shown in the table, our proposed PPVAE is capable of generating realistic conditional text.",
"Also, as shown in Table 8, we randomly select some examples from the output of each model on the topic task.",
"The output of S-VAE seems to be diverse but is poorly conditioned.",
"CTRL-GEN suffers from an obvious diversity issue, which makes it repeatedly output similar text.",
"For the error analysis, we pick some failure cases of PPVAE in Table 9.",
"We categorize the errors into two main classes.",
"(1) Grammatical .",
"Grammatical problems are common in NLG.",
"As we analyze, this kind of error can be mitigated with a deeper encoder and decoder and with even more unlabeled data for pre-training.",
"(2) Conditional .",
"Conditional errors are of great interest to us since they are the focus of this work.",
"We choose three typical errors and list them in Table 9.",
"In the first sentence, shocked is a subtle word which may indicate either positive or negative sentiment depending on the context.",
"Thus, with a greedy decoding strategy, it may be incorrectly decoded into the other polarity.",
"We believe this kind of error could be fixed with more elaborate decoding strategies (e.g., Weighted Decoding (See et al., 2019)).",
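To make the suggested fix concrete, here is a hedged sketch in the spirit of weighted decoding: the per-step logits are biased toward a target polarity before the greedy argmax. The vocabulary, lexicons, weight, and `step_logits` are all illustrative assumptions; See et al. (2019) define the actual method in more detail.

```python
# Hedged sketch of a weighted-decoding-style bias on greedy decoding.
import numpy as np

vocab = ["the", "movie", "was", "great", "terrible", "shocked"]
positive_lexicon = {"great"}
negative_lexicon = {"terrible"}

def reweight(step_logits: np.ndarray, target: str, w: float = 2.0) -> int:
    # add a bonus to tokens that match the target polarity
    bonus = np.zeros_like(step_logits)
    lex = positive_lexicon if target == "positive" else negative_lexicon
    for i, tok in enumerate(vocab):
        if tok in lex:
            bonus[i] = w
    return int(np.argmax(step_logits + bonus))  # greedy pick after biasing

print(vocab[reweight(np.array([0.1, 0.2, 0.3, 1.0, 1.1, 0.9]), "positive")])
```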
"In the second sentence, the length is limited by the nature of an interrogative sentence.",
"As a linguistic fact, an interrogative sentence often has fewer words than a declarative sentence.",
"In the third sentence, we observe an overlap problem between classes.",
"Some topics (e.g., music album) may appear in both business and entertainment news.",
"In some way, these samples can also be considered as correctly conditioned ones, which highlights the importance of a fine-grained human evaluation on this task.",
"In this paper, we present a novel PPVAE framework for flexible conditional text generation, which decouples the text generation module from the condition representation module.",
"The extensive experiments demonstrate the superiority of the proposed PPVAE against the existing alternatives on conditionality and diversity while allowing new conditions to be added without a full retraining.",
"We are grateful for the insightful comments from the anonymous reviewers.",
"We would like to especially thank Daya Guo for his help and suggestions.",
"This research was supported by the National Natural Science Foundation of China (No. 61872278).",
"Chenliang Li is the corresponding author."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"In this paper, we study two issues of semantic parsing approaches to conversational question answering over a large-scale knowledge base: (1) The actions defined in grammar are not sufficient to handle uncertain reasoning common in real-world scenarios.",
"(2) Knowledge base information is not well exploited and incorporated into semantic parsing.",
"To mitigate the two issues, we propose a knowledge-aware fuzzy semantic parsing framework (KaFSP).",
"It defines fuzzy comparison operations in the grammar system for uncertain reasoning based on the fuzzy set theory.",
"In order to enhance the interaction between semantic parsing and knowledge base, we incorporate entity triples from the knowledge base into a knowledge-aware entity disambiguation module.",
"Additionally, we propose a multi-label classification framework to not only capture correlations between entity types and relations but also detect knowledge base information relevant to the current utterance.",
"Both enhancements are based on pre-trained language models.",
"Experiments on a large-scale conversational question answering benchmark demonstrate that the proposed KaFSP achieves significant improvements over previous state-of-the-art models, setting new SOTA results on 8 out of 10 question types, gaining improvements of over 10% F1 or accuracy on 3 question types, and improving overall F1 from 83.01% to 85.33%.",
"The source code of KaFSP is available at https://github.com/tjunlp-lab/KaFSP.",
"With the growing popularity of intelligent virtual assistants (e.g., Alexa, Siri, Cortana) and the availability of large-scale knowledge bases (e.g., DBPedia (Auer et al., 2007), Wikidata (Vrandecic and Krötzsch, 2014), YAGO (Rebele et al., 2016)), conversational question answering (QA) over knowledge bases (KB) has attracted broad interest.",
"It aims to satisfy users' information needs by retrieving answers from a given knowledge graph to users' questions in a multi-turn conversational setting with a wide range of discourse phenomena (e.g., ellipsis, coreference, lexical cohesion).",
"While conversational QA over large-scale KBs can be realized without explicit semantic parsing (e.g., HRED-KVM (Kacupaj et al., 2021)), the majority of effort is dedicated to the exploration of contextual semantic parsers (Guo et al., 2018; Shen et al., 2019; Thirukovalluru et al., 2021; Kacupaj et al., 2021; Lan and Jiang, 2021).",
"The semantic parsing based approaches usually project an utterance into a logical form that can be executed on a given knowledge base.",
"The early semantic parsing method D2A (Guo et al., 2018) suffers from a stepwise error propagation issue, which is mitigated by MaSP (Shen et al., 2019) through jointly learning pointer-equipped semantic parsing and type-aware entity detection in a multi-task learning framework.",
"The very recent work LASAGNE (Kacupaj et al., 2021) further enhances MaSP via a graph attention network that exploits the correlation (missing in MaSP) between entity types and relations and achieves the state-of-the-art results on the CSQA benchmark (Saha et al., 2018).",
"Despite the aforementioned progress, we argue that current semantic parsing approaches to conversational QA over large-scale KBs still suffer from two critical issues.",
"First, grammar rules that form the base for the mapping of questions to logical forms, although being constantly updated in D2A, MaSP, and LASAGNE, are still not sufficient to cover all real-world situations, e.g., fuzzy inference on numbers.",
"Consider the question \" Which nutrients can interact with approximately 89 chemical substances and drugs? \".",
"It is difficult for existing grammar to represent \" approximately 89 \".",
"Second, the interaction between questions and knowledge base is not adequate for entity disambiguation and redundancy detection in semantic parsing.",
"For the question \" Which educational institution is the alma mater of Pierre Lefebvre? \", without using relevant information from the KB, it is difficult for semantic parsing to distinguish whether \" Pierre Lefebvre \" is a French military physician or a French politician, as more than one person named \" Pierre Lefebvre \" is in the knowledge base.",
"To address these two issues, we propose a K nowledgea ware F uzzy S emantic P arsing (KaFSP) model to enhance both grammar rules and the interaction between KB and semantic parsing.",
"Particularly, we introduce fuzzy operations into the grammar system used in previous work, enabling the system to perform uncertainty reasoning on numbers.",
"Such updates have a significant impact on answering quantitative and comparative questions.",
"In order to make the knowledge base well facilitate semantic parsing, we incorporate deep entity knowledge in the given knowledge base into different modules in the proposed semantic parsing framework.",
"In the entity disambiguation module, entity triples from the knowledge base are exploited to disambiguate candidate entities.",
"In the entity type and relation prediction module, a multi-label classification framework is proposed to capture correlations between entity types and relations and to pinpoint KB information relevant to the current utterance.",
"Contributions: We propose a knowledge-aware fuzzy semantic parsing framework for conversational QA over large-scale KBs, which enables the grammar system to model uncertainty reasoning based on the fuzzy set theory, and enhances the interaction between KB and semantic parsing with two knowledge-aware modules.",
"Experiment results demonstrate that our proposed model achieves new state-of-the-art results on 8 out of 10 question types on the CSQA dataset (Saha et al., 2018), which is to date the largest dataset for complex conversational question answering over a large-scale knowledge base.",
"Semantic parsing approaches have conventionally been used for knowledge base question answering (KBQA).",
"Early efforts parse natural language questions into logical forms typically via dictionary-based parsers or similarity models (Wong and Mooney, 2007; Zettlemoyer and Collins, 2007, 2009; Kwiatkowski et al., 2011; Andreas et al., 2013; Artzi and Zettlemoyer, 2013; Reddy et al., 2014; Zhao and Huang, 2015; Dubey et al., 2016; Long et al., 2016).",
"Recent years have witnessed semantic parsing shifting from traditional statistical models with feature engineering to neural approaches that learn continuous representations for generating logical forms (Yih et al., 2014; Jia and Liang, 2016; Xiao et al., 2016; Bao et al., 2016; Dong and Lapata, 2018, 2016; Bhutani et al., 2020; Lan and Jiang, 2020, 2021).",
"For example, Dong and Lapata (2016) use the encoder-decoder framework equipped with a neural attention mechanism to cast semantic parsing into Seq2Seq generation.",
"As knowledge bases are becoming large, semantic parsing for KBQA is usually performed in a stepwise, modular framework.",
"Guo et al. (2018) recognize entities in questions and link them to the given large-scale knowledge graph at the first stage and then learn to map the entity-linked questions into logical forms.",
"Dong and Lapata (2018) propose a coarse-to-fine two-stage decoding method for semantic parsing, which generates a coarse sketch for a question with low-level features at the first stage and then continues to decode the final logical form based on the output of the first stage as well as the question itself.",
"As mentioned in Section 1, such stepwise methods are confronted with error propagation across stages (e.g., from entity linking to mapping, from coarse parse to fine parse).",
"In order to alleviate such a problem, Shen et al. (2019) and Kacupaj et al. (2021) use a multi-task learning framework to jointly learn entity detection, linking, and semantic parsing in a single model.",
"Kacupaj et al. (2021) also use a graph attention network (Velickovic et al., 2018) to explore entity type and relation information in the knowledge base.",
"Due to the superiority of multi-task learning for semantic parsing tailored for KBQA, our work is also based on the multi-task learning framework.",
"However, our model is significantly different from existing works in both fuzzy grammar rules and knowledge-aware entity disambiguation together with entity type and relation prediction.",
"We use a multi-task learning framework to map an input (current question concatenated with context) into a logical form where entities are detected and linked to the given knowledge base.",
"Figure 1 shows the architecture of KaFSP.",
"The backbone network of KaFSP follows LASAGNE (Kacupaj et al., 2021) consisting of a seq2seq network, an entity recognition module and a graph attention network module (Section 3.2).",
"Our contributions lie in the fuzzy grammar (Section 3.1), the knowledge-aware entity disambiguation module (Section 3.3), and the entity type and relation prediction module (Section 3.4).",
"The two knowledge-aware modules are shown in the black dashed box in Figure 1.",
"In semantic parsing approaches tailored for conversational KBQA, a grammar with the minimum number of actions is usually defined to construct KB-executable logical forms (i.e., semantic parse trees).",
"The actions defined in the previous grammar system (Guo et al., 2018; Kacupaj et al., 2021) are all deterministic operations.",
"However, vague and fuzzy questions are common in real-world scenarios, e.g., \" How many works of art did approximately the same number of people do the dubbing for as Another ? \", which cannot be answered by previous deterministic grammars.",
"The grammar of LASAGNE includes an action termed \"approx\", which aims to perform the operation of \"approximately equal to\".",
"However, how two numbers are measured to be roughly equal to each other is not defined.",
"Therefore, we take the grammar of LASAGNE as a starting point for building our own grammar and add fuzzy actions to make it adapt to the real-world vague questions mentioned above.",
"The new grammar is briefly summarized in Table 1.",
"We further give a \"precise\" (measurable) definition for these added fuzzy actions based on the fuzzy set theory (Zadeh, 1965).",
"For a number $a$, we define its fuzzy set as $A = \{ (x, \mu(x)) \mid x \in \mathbb{R} \}$.",
"$\mu(x)$ is the membership function of set $A$, which indicates the degree of similarity between $x$ and $a$, and is defined based on a generalized bell-shaped membership function as: $\mu(x) = \frac{1}{1 + \left| \frac{x - a}{c} \right|^{2b}}$ (1), where $c \in \mathbb{R}$ and $b \in \mathbb{N}^{+}$.",
"When $\mu(x) = 1$, $x$ and $a$ are strictly equal; and when $\mu(x) = 0$, $x$ and $a$ are strictly not equal.",
"$A_{\approx} = \{ x \in \mathbb{R} \mid \mu(x) > \epsilon \}$, $A_{\gtrsim} = \{ x \in \mathbb{R} \mid x > a \} \cup A_{\approx}$, and $A_{\lesssim} = \{ x \in \mathbb{R} \mid x < a \} \cup A_{\approx}$.",
"When $\mu(x) > \epsilon$, then $x \in A_{\approx}$, which denotes that $x$ and $a$ are approximately equal to each other.",
"When $x \in A_{\gtrsim}$, $x$ is considered to be greater than or approximately equal to $a$.",
"When $x \in A_{\lesssim}$, $x$ is considered to be less than or approximately equal to $a$.",
"It is worth noting that all the parameters in Eq. (1) and the threshold $\epsilon$ can be flexibly predefined, which makes our grammar adjustable to different fuzzy scenarios.",
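As a sanity check on Eq. (1), here is a minimal sketch of the generalized bell membership function and the fuzzy approximately-equal test; the values of b, c, and the threshold ε are illustrative, since the paper deliberately leaves them as tunable parameters.

```python
# Minimal sketch of Eq. (1); b, c, and eps are illustrative choices.
def bell_membership(x: float, a: float, b: int = 2, c: float = 5.0) -> float:
    # generalized bell-shaped membership centered at a with width c
    return 1.0 / (1.0 + abs((x - a) / c) ** (2 * b))

def approx_equal(x: float, a: float, eps: float = 0.5) -> bool:
    return bell_membership(x, a) > eps

print(approx_equal(92, 89))   # True: 92 counts as "approximately 89" here
print(approx_equal(120, 89))  # False: membership is near zero
```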
"We follow the multi-task learning framework of LASAGNE (Kacupaj et al., 2021) to build the backbone network for our KaFSP.",
"Encoder and Decoder The skeleton of the entire model is a Transformer-based encoder-decoder network.",
"The input x fed into the encoder is formed in a way similar to LASAGNE, which is composed of the previous question, the answer to the previous question, and the current question separated by a symbol \"[SEP]\".",
"A special token \"[CTX]\" is appended to the input for encoding the input representation $h^{enc}_{ctx}$, as shown in Figure 1.",
"Both the encoder and decoder use a two-layer multi-head attention Transformer block, which can be formulated as: $h^{enc} = \mathrm{encoder}(x; \theta_{enc})$, $z^{dec} = \mathrm{decoder}(h^{enc}; \theta_{dec})$, $P(y^{dec} \mid x) = \prod_t \mathrm{softmax}(W^{dec} z^{dec}_t)$ (3), where $z^{dec}_t$ is the hidden state of the decoder at time step $t$, and $W^{dec} \in \mathbb{R}^{|V^{dec}| \times d}$ is the linear projection matrix at the target side.",
"The key task of the decoder is to generate an action (listed in Table 1) at each time step, so as to obtain the logical form $y^{dec}$ corresponding to the input $x$.",
"Entity Recognition Inspired by Shen et al. (2019), we jointly detect entities and their types in a BIO sequence labeling way.",
"The labels for the input sequence $x$ are in $\{O\} \cup (\{B, I\} \times \{T_i\}_{i=1}^{N_{tp}})$.",
"$T_i$ stands for the $i$-th entity type label, and $N_{tp}$ denotes the number of distinct entity types in the knowledge base.",
"An LSTM network, stacked over the encoder, is used to perform the sequence labeling task.",
"To make the outputs of the sequence labeling task compatible with logical forms, we follow LASAGNE and use a feedforward layer stacked over the LSTM layer.",
"The entire module of entity recognition is hence formulated as follows: $h^{LSTM} = \mathrm{LSTM}(h^{enc}; \theta_{LSTM})$, $h^{FFN} = \mathrm{LeakyReLU}(W^{FFN}_1 [h^{enc}; h^{LSTM}])$, $P(y^{ER} \mid x) = \prod_t \mathrm{softmax}(W^{FFN}_2 h^{FFN}_t)$ (4), where $h^{LSTM}_t$ is the LSTM hidden state at time step $t$, $h^{FFN}_t$ is the FFN-transformed version of $h^{LSTM}_t$, and $P(y^{ER} \mid x)$ denotes the probability distribution over entity tags.",
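A hedged sketch of the tagging head in Eq. (4) follows, using PyTorch; the hidden size, number of entity types, and sequence length are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of the entity-recognition head: LSTM over encoder
# states, LeakyReLU FFN over the concatenation, softmax over BIO tags.
import torch
import torch.nn as nn

d, n_tags, seq_len = 256, 2 * 10 + 1, 12  # {B,I} x 10 types, plus O
lstm = nn.LSTM(d, d, batch_first=True)
ffn1 = nn.Linear(2 * d, d)
ffn2 = nn.Linear(d, n_tags)

h_enc = torch.randn(1, seq_len, d)               # encoder states
h_lstm, _ = lstm(h_enc)
h_ffn = nn.functional.leaky_relu(ffn1(torch.cat([h_enc, h_lstm], dim=-1)))
tag_probs = torch.softmax(ffn2(h_ffn), dim=-1)   # P(y^ER | x), per token
print(tag_probs.shape)                           # (1, 12, 21)
```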
"Graph Attention Network (GAT) We follow LASAGNE to use the GAT module to learn the correlations between entity types and their relations in the knowledge base.",
"It can be defined as: $h^{GAT} = \mathrm{GAT}(e^{node}; \theta_{GAT})$ (5), where $e^{node}$ are the embeddings of nodes in the type-relation graph constructed from the knowledge base.",
"Please refer to Kacupaj et al. (2021) for more details on the GAT module.",
"In a large-scale knowledge base, it is common that entities with different meanings share the same surface forms.",
"Predicting entity types could help differentiate them.",
"However, when candidates have both the same type and surface form, it is difficult for entity type prediction to distinguish them again.",
"In order to address this issue, we incorporate more information about these ambiguous entities from the knowledge base to disambiguate them.",
"We model the entity disambiguation problem as a binary classification problem: $y = f(c, s, K(e))$ (6), where $s$ is the surface form of a candidate entity $e$, $c$ is the context in which $e$ occurs, and $K(e)$ denotes the relevant information about the candidate entity $e$ from the knowledge base.",
"If $y = 1$, the entity $e$ is disambiguated and linked to the true entity in the knowledge base defined by $K(e)$.",
"The purpose of this is to maximize both the true positives and the true negatives.",
"We define the context of e as the entire input x .",
"To define K ( e ) , we use all triples that are relevant to e in the knowledge base, regardless of whether the entity is a subject or an object in triples.",
"That is, K ( e ) is an ordered set of KB triples.",
"Each triple in $K(e)$ can be formulated as $(e_h, r, e_t)$, where the candidate entity $e$ is either the head entity ($e_h$) or the tail entity ($e_t$).",
"In Eq. (6), $f$ is the classifier used to disambiguate candidate entities.",
"We use the pre-trained language model XLNet (Yang et al., 2019), fine-tuned on the training dataset, as the classifier.",
"In order to feed $s$, $c$, and $K(e)$ into the pre-trained and fine-tuned classifier, we reorganize them into a concatenated textual sequence, with components separated by the token \"[SEP]\".",
"KB triples are all instantiated with corresponding words in the knowledge base, where e h , r , and e t are separated by blanks.",
"We use the top 3 triples in K ( e ) and feed them into the classifier, where the triples are sorted by their IDs.",
"Such a choice is a trade-off between knowledge graph coverage and memory consumption in practice.",
"If the number of relevant triples retrieved from the knowledge base is less than 3, we use the candidate entity itself to fill in the empty triples.",
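The following is a minimal sketch of how such a classifier input might be assembled from the surface form, the context, and up to three KB triples; the field order and the helper name `build_disamb_input` are assumptions for illustration.

```python
# Hedged sketch of assembling the XLNet disambiguation input.
def build_disamb_input(surface, context, triples, max_triples=3):
    # triples are assumed pre-sorted by their KB IDs, per the paper
    triples = list(triples)[:max_triples]
    while len(triples) < max_triples:   # pad with the entity itself
        triples.append((surface,))
    triple_text = " [SEP] ".join(" ".join(t) for t in triples)
    return f"{surface} [SEP] {context} [SEP] {triple_text}"

print(build_disamb_input(
    "Pierre Lefebvre",
    "Which educational institution is the alma mater of Pierre Lefebvre?",
    [("Pierre Lefebvre", "occupation", "military physician")]))
```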
"This module mainly performs two subtasks: the unified recognition of entity types and relations, and the KB-guided prediction of correct entity types and relations stacked over the first subtask, as shown in the Type & Relation Prediction module in Figure 1.",
"Let $\mathcal{G} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ denote the knowledge base, where $\mathcal{E}$ is the entity set and $\mathcal{R}$ is the relation set.",
"Each entity $e \in \mathcal{E}$ has an entity type $\tau \in \mathcal{T}$ (the entity type set).",
"We model the type and relation recognition subtask as a multi-label classification task and use a classifier to predict the probability of an output sequence from a given input sequence.",
"To obtain neural representations of both entity types and entity relations for the recognition subtask, we use a pre-trained language model BERT (Devlin et al., 2019).",
"The input fed into BERT is formed in a way similar to the entity disambiguation module.",
"The difference is that we replace the entity with its entity type.",
"Formally, the neural representation $e_{\tau}$ of an entity type $\tau$ is computed as follows: $e_{\tau} = \mathrm{BERT}_{[\mathrm{CLS}]}([\mathrm{CLS}]\ s(\tau)\ [\mathrm{SEP}]\ K(\tau)\ [\mathrm{SEP}])$, where the subscript $[\mathrm{CLS}]$ indicates that we use the representation of the prepended artificial [CLS] token as the representation of the entity type $\tau$, and $s(\tau)$ and $K(\tau)$ represent the surface form and triples of $\tau$, respectively.",
"Similarly, the neural representation $e_r$ of a relation $r$ is formulated as: $e_r = \mathrm{BERT}_{[\mathrm{CLS}]}([\mathrm{CLS}]\ s(r)\ [\mathrm{SEP}]\ K(r)\ [\mathrm{SEP}])$.",
"Kacupaj et al. (2021) find that modeling the correlations between entity types and relations is crucial for semantic parsing.",
"In our KaFSP, we use a single classifier to predict both entity types and relations, instead of using two separate classifiers that share no common information (Shen et al., 2019; Kacupaj et al., 2021).",
"Hence, the prediction space of our classifier is $\mathcal{T} \cup \mathcal{R}$, and the correlations between types and relations are naturally captured in the same single classifier.",
"We use a sigmoid function to output probabilities as follows: $P(y^{MLC} \mid x) = \mathrm{Sigmoid}(h^{enc}_{ctx} W^{MLC} [e_{\tau}; e_r])$ (7), where $W^{MLC} \in \mathbb{R}^{|\mathcal{T} \cup \mathcal{R}| \times d}$ is a linear projection matrix, and $[e_{\tau}; e_r]$ is the concatenation of the embeddings of $\tau \in \mathcal{T}$ and $r \in \mathcal{R}$.",
"The KB-guided prediction of entity types and relations is actually to make final decisions on them with relevant information from the knowledge base.",
"Since the KB contains many triples irrelevant to the current utterance $u$, in order to make the knowledge graph embedding provide the information related to $u$, we use the output probabilities from the proposed multi-label classifier to pinpoint relevant information from the knowledge base encoded by GAT.",
"Particularly, we calculate the Hadamard product of $P(y^{MLC} \mid x)$ and $h^{GAT}$: $h^{MLC} = W^{TRP} (h^{GAT} \odot P(y^{MLC} \mid x))$ (8), where $W^{TRP} \in \mathbb{R}^{2d \times d}$ is a linear projection matrix.",
"Given the hidden states of the decoder $z^{dec}$ and the last hidden state of the encoder $h^{enc}_{ctx}$, we use a feedforward network to predict the sequence of types and relations: $P(y^{TRP} \mid x) = \prod_t \mathrm{softmax}((h^{MLC})^{\top} \mathrm{FFN}([h^{enc}_{ctx}; z^{dec}_t]))$ (9), where $\mathrm{FFN}([h^{enc}_{ctx}; z^{dec}_t])$ is the projection of the concatenation of the context representation and the hidden state of the decoder at time step $t$.",
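To illustrate the gating in Eq. (8), here is a hedged sketch in PyTorch: the GAT embeddings are scaled element-wise by the multi-label probabilities so that items judged irrelevant to the utterance are damped before the linear projection. All shapes are illustrative assumptions.

```python
# Hedged sketch of knowledge gating: scale GAT embeddings by P(y^MLC|x).
import torch
import torch.nn as nn

n_items, d = 50, 128                   # |T ∪ R| candidate types/relations
h_gat = torch.randn(n_items, 2 * d)    # GAT node outputs
p_mlc = torch.sigmoid(torch.randn(n_items, 1))  # per-item probabilities
w_trp = nn.Linear(2 * d, d, bias=False)

h_mlc = w_trp(h_gat * p_mlc)           # low-probability items are damped
print(h_mlc.shape)                     # (50, 128)
```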
"Before training KaFSP, we use weak supervision (only the final answers) to obtain gold-standard logical forms of questions in the training set through breadth-first search (BFS), following Guo et al. (2018).",
"In KaFSP, we have 6 subtasks: the encoder-decoder subtask (DEC), the entity recognition subtask (ER), the filtering and permutation subtask from LASAGNE (FP), the multi-label classification subtask (MLC) described in Section 3.4, the type and relation prediction subtask (TRP) and the entity disambiguation subtask (ED).",
"We use a mixed training strategy to train these subtasks.",
"The first 5 subtasks are jointly trained in a multi-task learning way while the last subtask is separately trained.",
"Reasons for this strategy are twofold:",
"1) Entity disambiguation is a relatively independent subtask compared with other subtasks.",
"2) We fine-tune a huge pre-trained language model XLNet (Yang et al., 2019) on this subtask.",
"Direct incorporation of the fine-tuning procedure into multi-task learning may make it difficult for the entire model to converge.",
"The joint loss is $\mathcal{L} = \sum_{s \in M} \lambda_s \mathcal{L}_s$ (10), where $M = \{\mathrm{DEC}, \mathrm{ER}, \mathrm{FP}, \mathrm{MLC}, \mathrm{TRP}\}$ is the set of subtasks and $\lambda_s$ are the weights of these subtasks, which are learned during training.",
"In learning these weights, we take into account the difference in magnitude among the 5 losses according to the log standard deviation (Kendall et al., 2018).",
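A minimal sketch of this kind of uncertainty-based weighting follows, with one learnable log standard deviation per jointly trained subtask; the exact functional form used in KaFSP is not spelled out here, so this follows the common Kendall et al. (2018) recipe as an assumption.

```python
# Hedged sketch of homoscedastic-uncertainty loss weighting.
import torch

log_sigmas = torch.zeros(5, requires_grad=True)  # one per subtask

def weighted_total(losses):
    total = 0.0
    for loss, log_sigma in zip(losses, log_sigmas):
        # 1/sigma^2 scaling plus a log-sigma regularizer
        total = total + loss * torch.exp(-2 * log_sigma) + log_sigma
    return total

losses = [torch.tensor(2.0), torch.tensor(0.5), torch.tensor(1.0),
          torch.tensor(0.1), torch.tensor(3.0)]
print(weighted_total(losses))
```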
"$\mathcal{L}_{DEC}$, $\mathcal{L}_{ER}$, $\mathcal{L}_{FP}$, and $\mathcal{L}_{TRP}$ are the negative log-likelihood losses of 4 subtasks, which are defined as follows: $\mathcal{L}_{DEC} = -\sum_{k=1}^{m} \log P(y^{DEC}_k \mid x)$, $\mathcal{L}_{ER} = -\sum_{j=1}^{n} \log P(y^{ER}_j \mid x)$, $\mathcal{L}_{FP} = -\sum_{j=1}^{n} \log P(y^{FP}_j \mid x)$, $\mathcal{L}_{TRP} = -\sum_{k=1,\, y^{dec}_k \in P}^{m} \log P(y^{TRP}_k \mid x)$ (11), where $n$ and $m$ are the lengths of the input utterance $x$ and the gold-standard logical form, respectively.",
"P is the set of placeholders for relations and types.",
"y are ground-truth labels for corresponding subtasks.",
"The loss for the multi-label classification, $\mathcal{L}_{MLC}$, is a binary cross-entropy loss, defined as: $\mathcal{L}_{MLC} = -\frac{1}{l} \sum_{i=1}^{l} \left[ y^{MLC}_i \log P(y^{MLC}_i \mid x) + \bar{y}^{MLC}_i \log \bar{P}(y^{MLC}_i \mid x) \right]$ (12), where $l$ is the size of $\mathcal{T} \cup \mathcal{R}$, $\bar{y}^{MLC} = 1 - y^{MLC}$, $\bar{P} = 1 - P$, and $y^{MLC}$ is defined in Eq. (7).",
"The entity disambiguation is trained separately, and its loss function is defined as: $\mathcal{L}_{ED} = -\sum_{e \in E_x} \left[ y^{ED} \log P(y^{ED} \mid x) + \bar{y}^{ED} \log \bar{P}(y^{ED} \mid x) \right]$ (13), where $E_x$ is the set of entities that appear in $x$ and $y^{ED}$ is defined in Eq. (6).",
"To train this subtask, we retrieve all entities that are present in the current input from the knowledge base.",
"Note that we only construct 500,000 and 40,000 samples respectively for training and validation of the entity disambiguation module.",
"The grammar defined in Table 1 is used to guide the decoding step.",
"The decoder generates a sequence mixed with actions and placeholders.",
"Placeholders are instantiated with specific entities, types, relations, and numbers.",
"The decoding process for a logical form terminates when no nonterminals remain.",
"After decoding, we use a shift-reduce method to check the logical form sequence and delete or correct wrong placeholders.",
"Once the BIO tags and entity types are identified, entity spans can be located from the input utterance.",
"We search from the inverted index constructed for the knowledge base for each predicted entity span to obtain an entity candidate list.",
"After filtering the retrieved entity candidate list according to the corresponding entity type, if there are still multiple candidate entities, the entity disambiguation module is activated to calculate the conditional probability of each candidate entity.",
"The candidate entity with the highest probability is selected.",
"Finally, we use the relation and type prediction results and disambiguated entities to instantiate the placeholders to get final logical forms.",
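The inference-time linking logic described above can be summarized in a short hedged sketch; the candidate fields and the `ed_score` callback are illustrative assumptions, not the repository's actual interface.

```python
# Hedged sketch of the linking pipeline: type filtering, then ED scoring
# only when several candidates survive the filter.
def link_entity(span, predicted_type, candidates, ed_score):
    filtered = [c for c in candidates if c["type"] == predicted_type]
    if not filtered:
        filtered = candidates            # fall back to all candidates
    if len(filtered) == 1:
        return filtered[0]
    return max(filtered, key=ed_score)   # activate the ED module

cands = [{"id": "Q1", "type": "human"}, {"id": "Q2", "type": "human"}]
print(link_entity("Pierre Lefebvre", "human", cands,
                  lambda c: 0.9 if c["id"] == "Q2" else 0.4)["id"])
```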
"We carried out experiments and analyses to validate the effectiveness of the proposed KaFSP.",
"Dataset We evaluated the proposed model on the CSQA dataset (Saha et al., 2018), a standard dataset for complex sequential question answering.",
"The dataset is composed of 200K dialogues with 1.6M turns, and over 12.8M entities from Wikidata, where 153K, 16K, and 28K dialogues are used for training, validation, and testing, respectively.",
"The questions cover a wide range of linguistic phenomena, such as co-reference, ellipsis, and reasoning.",
"Evaluation Metrics We used the same evaluation metrics as Saha et al. (2018).",
"When answers are composed of one or more entities, F1 score is used as the evaluation metric.",
"When answers are a Boolean value or number, accuracy is used as the metric.",
"Following previous works (Guo et al., 2018; Shen et al., 2019; Kacupaj et al., 2021), we also calculated overall scores for all types of questions under each evaluation metric.",
"Baselines We compared KaFSP against 5 state-of-the-art baselines on the CSQA.",
"The first baseline is HRED+KVM (Saha et al., 2018), which combines the HRED model with the key-value memory network.",
"The other four baselines are D2A (Guo et al., 2018), MaSP (Shen et al., 2019), KISP (Thirukovalluru et al., 2021), and LASAGNE (Kacupaj et al., 2021), which achieve state-of-the-art results on different types of questions on the CSQA dataset.",
"More details on model settings can be found in Appendix A.",
"Table 2 shows experiment results on the CSQA dataset.",
"Our model outperforms LASAGNE on all types of questions and achieves new SOTA results in 8 out of 10 question types.",
"Additionally, our model outperforms all previous baselines in terms of \"overall\" results.",
"For question types that involve one or more entities, namely Logical Reasoning (All), Simple Question (Direct), and Verification (Boolean), the improvements over LASAGNE are 3.14%, 2.78%, and 1.29%, respectively.",
"This is mainly because we have added a knowledge-aware entity disambiguation module to improve the accuracy of entity linking.",
"The question types Clarification, Comparative Reasoning (All), and Comparative Reasoning (Count) usually involve multiple entity types and relations.",
"KaFSP achieves substantial improvements of 11.91%, 16.23%, and 19.45% over LASAGNE on these question types.",
"This is mainly due to fuzzy comparison rules in the new grammar system and the proposed knowledge-aware type and relation prediction module.",
"The module benefits from the multi-label classification with a single classifier that not only helps to capture correlations between entity types and relations but also pinpoints and incorporates only relevant information from the knowledge base into relation and type prediction, which makes the predictions of types and relations more accurate.",
"Our model does not outperform previous SOTA results on only 2 question types, i.e., Simple Question (Co-referenced) and Simple Question (Ellipsis).",
"Although KaFSP is lower than KISP on these two question types, it is 0.55% and 1.66% higher than LASAGNE.",
"We conjecture that the reasons for being not superior to KISP on these question types are twofold.",
"First, spurious logical forms may have a negative impact on the decoder when it is trained on data that contains false logical forms.",
"Second, in conversational QA, not only entities but also entity relations can be omitted in questions.",
"For example, \" How many people acted as an influence on Thomas Aquinas? And also tell me about Walt Whitman? \".",
"In KaFSP, we replace the real ID of an omitted entity with \"previous-entity\".",
"However, this strategy is not used for omitted relations when producing logical forms, which may have negative impacts on the two question types mentioned above.",

Table 3: Ablation study.

Question Type (F1 score) | KaFSP | w/o Fuzzy | w/o ED | w/o MLC
---|---|---|---|---
Clarification | 81.37% | 69.96% | 79.44% | 79.17%
Comparative | 86.00% | 70.55% | 85.88% | 85.65%
Logical | 92.97% | - | 90.03% | 89.60%
Quantitative | 93.74% | 86.64% | - | 93.32%
Simple (Coref) | 79.61% | - | 77.94% | 77.28%
Simple (Direct) | 90.73% | - | 88.13% | 88.19%
Simple (Ellipsis) | 81.75% | - | 80.34% | 79.05%

Question Type (Accuracy) | KaFSP | w/o Fuzzy | w/o ED | w/o MLC
---|---|---|---|---
Verification | 80.15% | - | 78.15% | 79.02%
Quantitative | 61.23% | 57.74% | 59.36% | 59.46%
Comparative | 72.79% | 54.55% | 72.39% | 71.93%
"Furthermore, although KaFSP increases the number of parameters, most added parameters are from the pretrained XLNet (base) model included for entity disambiguation.",
"This does not have a big impact on the inference speed of KaFSP compared to LASAGNE.",
"Table 3 summarizes experiment results of ablation study on our major contributions: fuzzy grammar, the knowledge-aware entity disambiguation module, and the multi-label classification framework.",
"We observe that all three key components make substantial contributions to our proposed model.",
"For the ablation study on the entity disambiguation module, we compared KaFSP against \"w/o ED\", which directly selects the first entity from the ordered candidate list retrieved from the knowledge base as the disambiguated entity.",
"When the ED module is not used, candidate entities are sorted lexicographically by their IDs.",
"This was done to be consistent with previous approaches in our baselines.",
"We find that for all types of questions, the application of the proposed knowledge-aware ED improves the results to various degrees.",
"This is because entity ambiguity is present in a wide range of questions.",
"For Simple Question (Direct) questions, our further analysis shows that 14.11% of entities are updated by our knowledge-aware ED, which leads to an improvement of 2.60%.",
"Both natural language questions and the knowledge base contain information that can be used to disambiguate entities.",
"The proposed knowledge-aware ED incorporates both types of information for disambiguation.",
"Table 4 shows the total number of entities, the total number of disambiguated entities, and their proportions in the logical forms of different types of questions.",
"It can be seen that overall, the disambiguated entities account for 13.98%.",
"For Logical Reasoning and Verification questions, where the proportion of disambiguated entities is relatively high, the improvements achieved by adding the entity disambiguation module are correspondingly high.",
"This further validates the effectiveness of the proposed entity disambiguation module.",
"Similarly, our ablation study validates the effectiveness of both the fuzzy grammar and the knowledge-aware multi-label classification (case study on the multi-label classification can be found in Appendix B).",
"For error analysis, we randomly sampled 100 incorrect predictions and summarized the following main error types.",
"Entity Ambiguity (54%) Although our entity disambiguation model can achieve a prediction accuracy of 95.16%, ambiguous entities still exist in some questions.",
"Take the question \"What lead to the death of Jerry Stephenson?\" as an example.",
"Both entities Q6184489 and Q100927364, which match the surface form \" Jerry Stephenson \", are found in the knowledge base.",
"However, it is difficult to determine whether the real entity in the question is Q100927364 (college basketball player (1971–1971), Austin Peay) or Q6184489 (American baseball player) with only the information of three triples and insufficient context.",
"Spurious Logical Forms (6%) Similar to previous works (Shen et al., 2019; Kacupaj et al., 2021), we find that our model can infer correct answers even with wrong \"ground-truth\" logical forms generated with the algorithm taken from previous work (Guo et al., 2018).",
"This will affect the overall performance of the model.",
"Such a phenomenon is especially common in complex reasoning questions.",
"In this paper, we have presented a knowledge-aware fuzzy semantic parsing framework KaFSP for conversational question answering over a large-scale knowledge base.",
"KaFSP defines fuzzy comparison actions in grammar based on the fuzzy set theory to cover approximately comparative reasoning.",
"In addition to this, we propose two knowledge-aware components in KaFSP to incorporate information from the knowledge base for entity disambiguation and entity type & relation prediction.",
"Experiment results demonstrate that KaFSP is substantially better than all previous state-of-the-art models, setting new SOTA results on 8 out of 10 question types on the CSQA dataset and achieving over 90% F1 or accuracy in 3 question types for the first time.",
"The present research was supported by Zhejiang Lab (No. 2022KH0AB01) and Huawei (No. TC20210528011).",
"We would like to thank the anonymous reviewers for their insightful comments.",
"We also want to thank MindSpore, a new deep learning computing framework, for the partial support of this work.",
] | [
"method",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Mixed counting models that use the negative binomial distribution as the prior can well model over-dispersed and hierarchically dependent random variables; thus they have attracted much attention in mining dispersed document topics.",
"However, the existing parameter inference method like Monte Carlo sampling is quite time-consuming.",
"In this paper, we propose two efficient neural mixed counting models, i.e., the Negative Binomial-Neural Topic Model (NB-NTM) and the Gamma Negative Binomial-Neural Topic Model (GNB-NTM) for dispersed topic discovery.",
"Neural variational inference algorithms are developed to infer model parameters by using the reparameterization of Gamma distribution and the Gaussian approximation of Poisson distribution.",
"Experiments on real-world datasets indicate that our models outperform state-of-the-art baseline models in terms of perplexity and topic coherence.",
"The results also validate that both NB-NTM and GNB-NTM can produce explainable intermediate variables by generating dispersed proportions of document topics.",
"Mixture modeling is an essential topic in statistics and machine learning areas, owing to generating the random probability measure of data samples belonging to multiple clusters.",
"In unsupervised learning tasks such as topic discovery, mixture modeling has gained increasing attention from researchers (Wang et al., 2011; Zhou and Carin, 2012, 2015; Zhou, 2018; Zhao et al., 2019).",
"Specifically, mixture modeling over document words devotes to assign these words to different topics via random probability measures.",
"Hierarchical Dirichlet Process (HDP) (Teh et al., 2004) is one of the representative methods in mixture modeling, which can characterize the two-level dependency of random probability measures.",
"Although we can use Monte Carlo sampling or variational inference to estimate the parameters in HDP, it requires the help of indirect construction of random variables such as the Chinese Restaurant Franchise (Teh et al., 2004) or the Stick-Breaking construction (Wang et al., 2011) due to the lack of conjugation between the two-tier Dirichlet processes.",
"This makes the inference of HDP rather complicated (Zhou et al., 2016).",
"The mixed counting models represented by the Negative Binomial (NB) process (Titsias, 2007) and the Gamma Negative Binomial (GNB) process (Zhou and Carin, 2012) have solved this problem to a certain extent, in which, the normalized GNB process has been proven to be equivalent to HDP (Zhou and Carin, 2012).",
"Because both NB and GNB processes satisfy the properties of completely random measures (Charles Kingman, 1967), the generative process of random probability measures among various mixed components is independent and becomes straightforward.",
"Moreover, they naturally introduce non-negative constraints and have been proven able to model over-dispersed data.",
"In the case of mining latent topics of documents, the over-dispersed property indicates that the variance is larger than the mean for document-topic distributions.",
"When compared to the NB process, the GNB process has an extra feature of describing more flexible stochastic phenomena with hierarchical dependencies.",
"Despite the above advantages, with the increase of data size and observable information, the aforementioned parameter inference method like Monte Carlo sampling or variational inference has gradually become an important factor limiting the usage scenarios of mixed counting models (Miao et al., 2016).",
"The reason is that Monte Carlo sampling has a high computational cost, and variational inference becomes intractable when applied to models with complex variable dependencies (Acharya et al., 2015).",
"Neural variational inference (NVI) is a flexible and fast parameter inference framework based on neural networks (Mnih and Gregor, 2014).",
"It can be regarded as a generalization of variational auto-encoder applicable to natural language processing tasks.",
"Based on NVI, several neural topic models had been proposed and achieved encouraging performance in document modeling (Miao et al., 2016; Srivastava and Sutton, 2017; Miao et al., 2017).",
"These models used the neural network to learn the distribution relationship between input documents and latent topics due to its excellent function fitting ability and scalability.",
"Particularly, the neural network parameters can be trained by back-propagation through the reparameterization of a continuous distribution (Naesseth et al., 2017) or using variance reduction techniques for a discrete distribution (Mnih and Gregor, 2014).",
"However, the hidden variables in the above neural topic models lack good interpretability, and it is also impossible to model over-dispersed and hierarchically dependent document sets for these methods.",
"In this paper, we propose two novel neural mixed counting models dubbed the Negative Binomial-Neural Topic Model (NB-NTM) and the Gamma Negative Binomial-Neural Topic Model (GNB-NTM) based on NB and GNB processes, respectively.",
"The general motivation is to combine the advantages of NVI and mixed counting models.",
"On the one hand, NVI-based models are fast and easy to estimate but hard to interpret.",
"On the other hand, document modeling via mixed counting models is easy to interpret but difficult to infer.",
"In our NB-NTM and GNB-NTM, we develop NVI algorithms to infer parameters by using the reparameterization of Gamma distribution and the Gaussian approximation of Poisson distribution.",
"Extensive experiments on real-world datasets validate the effectiveness of our proposed models in perplexity, topic coherence, and dispersed topic learning.",
"Furthermore, the proposed models can describe the hierarchical dependence of random probability measures and introduce non-negative constraints, which renders the intermediate variables generated by our methods to have good interpretability.",
"The remainder of this article is organized as follows.",
"In Section 2, we summarize the related studies on topic discovery.",
"In Section 3, we introduce the definitions and properties of background methods.",
"The proposed models are described in Section 4, the experimental evaluations are shown in Section 5, and we draw the conclusions in Section 6.",
"Topic discovery aims to use the statistical information of word occurrences to obtain the abstract semantic structure embedded in a document set.",
"From Bayesian methods represented by latent semantic analysis (LSA) (Deerwester et al., 1990), probabilistic latent semantic analysis (PLSA) (Hofmann, 1999), latent Dirichlet allocation (LDA) (Blei et al., 2003), and the Hierarchical Dirichlet Process (HDP) (Teh et al., 2004), topic discovery has been widely researched in natural language processing and applied to many scenarios.",
"For instance, the above models were extended to capture topic relevance (Blei and Lafferty, 2005) and topic evolution over time (Wang and McCallum, 2006; Blei and Lafferty, 2006).",
"Algorithms for short text (Yan et al., 2013), tagged data (Ramage et al., 2009), and stream data (Yao et al., 2009) were also proposed.",
"Considering the importance of prior distributions in LDA-based models, some research efforts tried to use beta and Gaussian distributions instead of the Dirichlet distribution as the prior of probabilistic graphical models (Thibaux and Jordan, 2007; Das et al., 2015).",
"Although the Bayesian method is a natural way to represent the latent structure of a document set in topic discovery, as the structure of such a model becomes deeper and more complex, pure Bayesian inference becomes intractable due to the high dimensional integrals required (Miao et al., 2016).",
"To address this issue, Cheng and Liu (2014) proposed a parallel Monte Carlo sampling method for HDP based on multi-threading.",
"Unfortunately, it needs to traverse every word of all topics (i.e., threads) in the whole corpus when updating the topic-word distribution, rendering a large time cost for thread communication.",
"With the development of deep learning, especially the introduction of NVI, there is a new direction to discover topics based on neural networks.",
"For example, Miao et al. (2016) assumed that word distributions in each document could be represented by hidden variables sampled from multiple Gaussian distributions, and they used the variational lower bound as the objective function of their model named NVDM.",
"Srivastava and Sutton (2017) employed the logical Gaussian distribution to approximate the Dirichlet distribution, which improved the variational auto-encoder and LDA simultaneously.",
"Miao et al. (2017) proposed a method named GSM to model the document-topic distribution explicitly.",
"In their study, the topic-word distribution was introduced into the decoder.",
"Besides the above NVI-based methods, Nalisnick and Smyth (2017) developed a stick-breaking variational auto-encoder for image generation.",
"Nan et al. (2019) proposed a model named W-LDA in the Wasserstein auto-encoder framework.",
"They employed the Maximum Mean Discrepancy (MMD) in W-LDA to match the proposed distribution and the prior distribution.",
"However, the accuracy of MMD relied heavily on the number of samples for each distribution, and the kernel function in MMD had a significant influence on the performance.",
"By leveraging word embeddings, Gupta et al. (2019) proposed a neural autoregressive topic model dubbed iDocNADE to enrich the context of short text.",
"Experiments indicate that iDocNADE outperformed state-of-the-art generative topic models.",
"The recent relevant work to ours is the method proposed in (Zhao et al., 2019), which regarded the NB distribution as the prior in modeling the over-dispersed discrete data.",
"However, the parameters of this method were still derived from the latent variables that obey the Gaussian distribution.",
"Thus, these latent variables do not satisfy the non-negative constraint and lack good interpretability.",
"Furthermore, the above method did not model topics explicitly, making it hard to generate document-topic and topic-word distributions.",
"Let $X \sim \mathrm{NBP}(G_0, p)$ denote a NB process defined on the product space $\Omega \times \mathbb{R}_+$, where $G_0$ is a finite continuous basic measure on a completely separable measure space $\Omega$, and $p$ is a scale parameter.",
"For each Borel set $A \subset \Omega$, we use $X(A)$ to denote a count random variable describing the number of observations that reside within $A$.",
"Then, $X(A)$ obeys the NB distribution $\mathrm{NB}(G_0(A), p)$.",
"Given the $k$-th component $\omega_k$ and its weight $r_k$ on $\Omega$, if $G_0$ is expressed as $G_0 = \sum_{k=1}^{\infty} r_k \delta_{\omega_k}$, where $\delta$ is the Dirac delta function, then $X \sim \mathrm{NBP}(G_0, p)$ can be expressed as $X = \sum_{k=1}^{\infty} n_k \delta_{\omega_k}$, where $n_k \sim \mathrm{NB}(r_k, p)$.",
"The NB distribution $m \sim \mathrm{NB}(r, p)$ has the probability mass function $f_M(m) = \frac{\Gamma(r + m)}{m!\,\Gamma(r)} (1 - p)^r p^m$, where $\Gamma(\cdot)$ denotes the gamma function.",
"For the above probability mass function, the mean and the variance are $\mu = rp/(1-p)$ and $\sigma^2 = rp/(1-p)^2 = \mu + r^{-1}\mu^2$, respectively.",
"Because the mean is smaller than the variance, i.e., the variance-to-mean ratio is greater than 1, NB distributions have shown great advantages in overdispersed data modeling (Zhou and Carin, 2012).",
"Moreover, since the NB distribution $m \sim \mathrm{NB}(r, p)$ can be decomposed into a Gamma distribution mixed with a Poisson distribution, i.e., $m \sim \mathrm{Poisson}(\lambda)$ and $\lambda \sim \mathrm{Gamma}(r, p/(1-p))$, the NB process mentioned earlier can be extended to a Gamma-Poisson process (Zhou and Carin, 2015) as follows: $X \sim \mathrm{PP}(\Lambda)$ and $\Lambda \sim \mathrm{GaP}(G_0, (1-p)/p)$, where $\mathrm{PP}(\cdot)$ and $\mathrm{GaP}(\cdot)$ denote the Poisson process and the Gamma process, respectively.",
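This Gamma-Poisson construction is easy to verify numerically; the following sketch samples NB counts through the mixture and checks that the variance exceeds the mean (the parameter values are illustrative).

```python
# Numerical check of the Gamma-Poisson construction of NB(r, p):
# the sampled counts should be over-dispersed (variance > mean).
import numpy as np

rng = np.random.default_rng(0)
r, p = 2.0, 0.6
lam = rng.gamma(shape=r, scale=p / (1 - p), size=100_000)  # Gamma mixing
m = rng.poisson(lam)                                       # Poisson counts

print(m.mean())  # close to rp/(1-p) = 3.0
print(m.var())   # close to rp/(1-p)^2 = 7.5, i.e., variance > mean
```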
"The random probability measure corresponding to each mixed component in the NB process can be directly sampled from the NB distribution without resorting to the Chinese Restaurant Franchise, the Stick-Breaking, or other construction methods, because each random measure is independent of the others, i.e., the NB process is completely random.",
"In the NB process, each Poisson process shares the same Gamma process prior with a fixed mean.",
"Based on the NB process, the GNB process assigns another Gamma process as a prior to its mean, making it easier to model over-dispersed data (Zhou et al., 2016).",
"Particularly, the generative process of random variables for the GNB process is as follows: $G \sim \mathrm{GaP}(G_0, c)$, $\Lambda_j \sim \mathrm{GaP}(G, (1-p_j)/p_j)$, and $X_j \sim \mathrm{PP}(\Lambda_j)$, where $j$ is the subset index, $c$ is the scale parameter of the first-level Gamma process, and the basic measure $G_0$ in the NB process is replaced by another random measure $G$.",
"It has been shown that HDP is a normalized form of the GNB process in (Zhou and Carin, 2012).",
"However, unlike HDP, the GNB process explicitly introduces the parameter p j to control the dispersion degree of instantaneous measurement, making the latter model more flexible.",
"NVI is often used as an efficient parameter inference framework for complex and deep-seated structural models.",
"Inspired by the variational auto-encoder, NVI assumes that the observed data d is subject to a certain probability distribution determined by a hidden variable h .",
"In contrast to variational auto-encoders, which handle the case of continuous latent variables (Kingma and Welling, 2014), NVI can deal with both discrete and continuous latent variables.",
"Specifically, a neural network is used to infer the proposed distribution q ( h | d ) .",
"As stated in (Miao et al., 2017), Monte Carlo estimates of the gradient must be employed for models with discrete latent variables.",
"In the case of q ( h | d ) being continuous, the hidden variable h is firstly obtained by sampling from q ( h | d ) through the corresponding reparameterization approach.",
"Then, the likelihood p ( d | h ) is used to reproduce the observed data from hidden variables, and the objective is to minimize the Kullback-Leibler (KL) divergence of the proposed distribution and the actual posterior distribution.",
"Finally, the variational lower bound is obtained by $\mathcal{L} = \mathbb{E}_{q(h \mid d)}[\log p(d \mid h)] - D_{KL}[q(h \mid d) \,\|\, p(h)]$, where the first term is the expectation of the log-likelihood, and the second one is the KL divergence between the inferred distribution and a predefined prior.",
"To sum up, NVI first uses a neural network to infer the proposed distribution q ( h | d ) , and then maximizes the variational lower bound by backpropagation to fit the actual posterior distribution p ( h | d ) .",
"Such a framework learns the distribution of input data well, enabling it to combine with the traditional probability graphical models (e.g., LDA) and infer model parameters quickly (Srivas-tava and Sutton, 2017).",
"However, how to effectively integrate the distributed dependencies in mixed counting models into the framework of variational inference is still quite a challenging problem.",
"In this section, we respectively detail our NB-NTM and GNB-NTM for dispersed topic discovery.",
"With a NB process prior, we propose the NB-NTM to model the counting of document words.",
"Furthermore, a novel NVI framework is developed for parameter inference.",
"Let $D = \{ d_1, ..., d_{|D|} \}$ be the input with $|D|$ documents and each document $d \in \mathbb{R}^V$ be a bag-of-words representation, where $V$ is the vocabulary size.",
"Since it is impossible to draw all the countably infinite atoms of a Gamma process, we first employ the finite truncation strategy, in which, a number of topics K (i.e., the truncated level) is set manually (Nalisnick and Smyth, 2017; Zhou, 2018).",
"Note that although K is fixed, if K is set to be large enough, not necessarily all topics would be used and hence a truncated model still preserves its nonparametric ability; whereas if K is set to be small, asymmetric priors on the topic weights are also maintained (Zhou, 2018).",
"Then we can express the generative process of NB-NTM for document d as follows: r = f 1 ( d ) , p = f 2 ( d ) , (1) Gamma ( r , p / ( 1 p )) , (2) n Poisson ( ) , (3) where f 1 ( ) and f 2 ( ) are two multilayer percep-trons (MLPs) applying to generate the variational parameters r and p .",
"Specifically, r is the component weight of G , i.e., the topic measure at the corpus level, and G = (cid:80) Kk =1 r k k .",
"represents the weights of topics at the document level, which can be used to estimate the topic measure on d by = (cid:80) K k =1 k k .",
"In the above, k denotes the k th component of .",
"Finally, n is the component weight of that represents a Poisson process at the word level, and = (cid:80) Kk =1 n k k .",
"The framework of NB-NTM is shown in Figure 1, and the parameter inference process is described as follows.",
"For the logarithmic likelihood of each document d , we can derive the variational lower bound by L = DKL ( q ( | d ) || p ( )) + E q ( | d ) (cid:104)(cid:80) N d i =1 log p ( i | ) (cid:105) .",
"In the above, q ( | d ) is the encoder's inference of posterior probability, i.e., Gamma( r , p ) , i RV is the one-hot representation of the word at the i th position, N d is the number of words in document d , and p ( ) is the Gamma prior for , i.e., Gamma( , c ) .",
"The KL divergence between q ( | d ) and p ( ) , i.e., Gamma( r , p ) and Gamma( , c ) , is calculated by following (Mathiassen et al., 2002): DKL ( q ( | d ) || p ( )) = (cid:80) Kk =1 [( r k 1)( r k ) log p k r k log ( r k ) ( 1)(( r k )+log p k )+ log ( )+ log c + r k p k c ] , where ( ) is the Digamma function.",
"The conditional probability over each word p ( i | ) is modeled by softmax function, as follows: p ( i | ) = exp { ( n TR i + b i ) } (cid:80) Vj =1 exp { ( n TR j + b j ) } , where R and b denote the weight matrix and the bias term, respectively.",
"We present the parameter inference process of NB-NTM in Algorithm 1, in which, the variational lower bound L is used to calculate gradients and model parameters are updated by Adam (Kingma and Ba, 2015).",
"Based on the NB-NTM, we further propose the GNB-NTM by assigning another Gamma process as a prior to the NB process.",
"As shown in Figure 2, the generative process of GNB-NTM for document d is given below: = f 1 ( d ) , = f 2 ( d ) , (4) r Gamma ( , ) , (5) Gamma ( r , p/ (1 p )) , (6) p = f 3 ( d ) , n Poisson ( ) .",
"(7) In the above, and are the parameters of the first-level Gamma process, and p is the scale parameter of the second-level Gamma process.",
"The differences between GNB-NTM and NB-NTM are three-fold.",
"Firstly, another Gamma process G 0 is introduced over the existing Gamma process G as a prior of its shape parameter, so as to characterize the multi-level dependencies of random variables.",
"In particular, G 0 = (cid:80) Kk =1 k k .",
"Secondly, a scale parameter p is introduced for each document to describe the dispersion degree of all words in the document.",
"Thirdly, the GNB-NTM employs n + r as the input of the decoder by following the production rule of the observed variable in (Zhou and Carin, 2012).",
"Using n + r as the input also helps to incorporate the global topic information into the decoder's inference of posterior probability q ( r | d ) .",
"Thus, the conditional probability over each word p ( i | r ) is modeled as follows: p ( i | r ) = exp { ( ( n + r ) TR i + b i ) } (cid:80) Vj =1 exp { ( ( n + r ) TR j + b j ) } .",
"Similar to NB-NTM, the variational lower bound is derived by: L = E q ( r | d ) (cid:104)(cid:80) N d i =1 log p ( i | r ) (cid:105) DKL ( q ( r | d ) || p ( r )) , where p ( r ) is the Gamma prior for r , i.e., Gamma( , c ) .",
"The parameter inference for GNB-NTM is presented in Algorithm 2.",
"We use the variational lower bound to calculate gradients and apply Adam to update parameters of GNB-NTM, which are the same as NB-NTM.",
"up-Algorithm 2: Parameter Inference for GNB-NTM",
"date model parameters through back-propagation.",
"Here, we describe the reparameterization approach for smoothing gradients.",
"For the Gamma distribution x Gamma( , ) with > 1 , the reparameterization can be obtained by the reject-sampling method (Naesseth et al., 2017), i.e., x = 1 (cid:0) 13 (cid:1) (cid:16) 1 + 9 3 (cid:17) 3 , (cid:15) N (0 , 1) .",
"Besides, the shape augmentation method (Naesseth et al., 2017) is applied to convert 1 to > 1 to increase the accept rate of each rejection sampler.",
"For the Poisson distribution which is discrete, we use the Gaussian distribution as an approximation (Rezende et al., 2014; Kingma and Welling, 2014).",
"Based on the central limit theorem, N ( = , 2 = ) can approximate Poisson( ) .",
"Thus, we sample from the Poisson distribution directly to avoid the issue of discretization and use the Gaussian distribution as an approximation when calculating the Poisson distribution's gradient.",
"Particularly, the reparameterization of a Gaussian distribution x N ( , 2 ) is x = + (cid:15) , (cid:15) N (0 , 1) .",
"We employ the following three datasets to evaluate the effectiveness of our models: Reuters 1 , 20News, and MXM song lyrics (Miao et al., 2017).",
"The Reuters dataset contains 7,758 training documents and 3,005 testing documents.",
"The 20News corpus consists of 18,773 news articles under 20 categories.",
"These news articles are divided into 11,268 training documents and 7,505 testing documents.",
"The 20 categories include sports, electronics, automotive, and so forth, and the number of documents under each category is almost the same.",
"MXM is the offi-cial lyrics collection of the Million Song Dataset, which contains 210,519 training documents and 27,143 testing documents, respectively.",
"By following (Miao et al., 2017), we use the originally provided vocabulary with 5,000 words for MXM, while for Reuters and 20News, we use stemming, stop words filtering, and the 2,000 most frequently occurred words as vocabularies.",
"The statistics of these datasets are presented in Table 1.",
"The following models are adopted as baselines: HDP (Teh et al., 2004), NVDM (Miao et al., 2016), NVLDA and ProdLDA (Srivastava and Sutton, 2017), GSM (Miao et al., 2017), and iDocNADE (Gupta et al., 2019).",
"Among these baselines, HDP is a classical mixture modeling method followed 1 https://www.nltk.org/book/ch02.html the equivalence with the normalized GNB process (Zhou and Carin, 2012).",
"In HDP, the model parameters are estimated by Monte Carlo sampling.",
"NVDM, NVLDA, ProdLDA, and GSM are all neural topic models based on NVI.",
"Considering that word embeddings have shown to capture both the semantic and syntactic relatedness in words and demonstrated impressive performance in natural language processing tasks, we also present the result of a neural autoregressive topic model that leverages word embeddings (i.e., iDocNADE).",
"Particularly, the publicly available codes of HDP 2 , NVDM 3 , NVLDA and ProdLDA 4 , and iDocNADE 5 are directly used.",
"As an extended model of NVDM, the baseline of GSM is implemented by us based on the code of NVDM.",
"To ensure fair comparisons on various NVI-based methods, unless explicitly specified, we set the number of topics to 50, the hidden dimension of MLP to 256, and use one sample for NVI by following (Miao et al., 2017).",
"For the batch size, the learning rate, and other model parameters, grid search is carried out on the training set to determine their optimal values and achieve the held-out performance.",
"To evaluate the quality of topics generated by different models, we use perplexity and topic coherence as evaluation criteria.",
"The perplexity of each model on a testing set (cid:101) D is: perplexity ( (cid:101) D ) = exp (cid:16) 1 | (cid:101) D | (cid:80) (cid:101) d 1 N (cid:101) d log p ( (cid:101) d ) (cid:17) , where log p ( (cid:101) d ) represents the log-likelihood of the model on document (cid:101) d , and N (cid:101) d is the number of words in (cid:101) d .",
"The lower the perplexity is, the more likely for a model to generate (cid:101) D .",
"Therefore, if a model obtains a lower perplexity than others in the testing set, it can be considered as the better one.",
"For all NVI-based topic models, the variational lower bound, which is proven to be the upper bound of perplexity (Mnih and Gregor, 2014), is used to calculate the perplexity by following (Miao et al., 2016, 2017).",
"When calculating the topic coherence, we use the normalised pointwise mutual information (NPMI) which measures the relationship between word w i and other T 1 top words (Lau et al., 2014) as follows: NPMI ( w i ) = (cid:80) T 1 j =1 [log P ( w i ,w j ) P ( w i ) P ( w j ) / log P ( w i , w j )] .",
"The higher the value of topic coherence, the more explainable the topic is. 2 https://github.com/soberqian/ TopicModel4J 3 https://github.com/ysmiao/nvdm 4 https://github.com/akashgit/ autoencoding_vi_for_topic_models 5 https://github.com/pgcool/iDocNADEe 5.3 Performance Comparison Table 2 shows the perplexity and topic coherence of different models on the test datasets.",
"We can observe that NB-NTM outperforms most baselines, and GNB-NTM performs the best in all cases.",
"The results validate that the NB distribution can model over-dispersed documents well.",
"Furthermore, the latent semantics of these corpora may be hierarchically dependent.",
"In other words, the topics at the corpus level and those of each document are not independent but correlated with one another.",
"In terms of the model efficiency, neural topic models can be trained much faster than HDP on a large corpus by GPU acceleration.",
"Take the large-scaled MXM dataset as an example, the training time of both NB-NTM and GNB-NTM is around one hour using a GeForce GTX 960 GPU, while HDP needs more than three hours to converge using an AMD R5 3600 CPU.",
"Under the same environment, the training time of all NVI-based topic models is close.",
"In general, NVLDA, prodLDA, and NVDM run slightly faster than NB-NTM because the Gaussian reparameterization approach is simpler than the Gamma one.",
"GSM and GNB-NTM are slightly slower than others because the former introduces more parameters to model the topic-word distribution, while the latter introduces more sampling operations.",
"As an illustration, we also qualitatively evaluate the semantic information learned by different models on the 20News training set.",
"The baselines of HDP, NVLDA, and prodLDA, which achieve competitive topic coherence scores, are selected for comparison.",
"Table 3 presents 5 of the most representative topics with the corresponding top 10 words, from which we can observe that although all these models can identify the chosen topics reasonably, our NB-NTM and GNB-NTM perform better than the other baselines in most cases.",
"In this part, we test the impact of the number of topics on the performance of our models.",
"Figure 3 shows the convergence process of NB-NTM and GNB-NTM on the 20News training set with K = 20, 50, 100, 200 in terms of the perplexity.",
"We can observe that as K increases, the perplexity values of both models decrease under each epoch.",
"This is because the NVI framework is essentially an encoder-decoder, and the increase of the topic number enables the models to encode and reconstruct documents better.",
"We also notice that with the continuous growth of K , the improvement of perplexity is getting lower.",
"Table 4 presents the results of our models on the 20News testing set under the above conditions, in which a similar trend can be observed as aforementioned.",
"topics are dispersed, and thus, the intermediate variables can be more explainable.",
"To validate the effectiveness of our models on learning dispersed topics, we first count the total number of words under each manually labeled category (i.e., topic) as the topic-word number distribution shown in Figure 4",
"(a).",
"Then we run our NB-NTM and GNB-NTM on the entire 20News testing set to get the corresponding values of r .",
"After normalization, the proportion of different topics obtained by NB-NTM and GNB-NTM at the corpus level is presented in Figure 4",
"(b) and Figure 4",
"(c), respectively.",
"For the convenience of the result presentation, we set the number of topics to 20 for both models.",
"Note that the 20 topics do not need to correspond to the 20 categories, because we here focus on testing whether the topic proportions generated by our two models are in accordance with their model structures/characteristics.",
"From these results, we can observe that the proportion of topics obtained by NB-NTM is close to the topic-word number distribution.",
"On the other hand, GNB-NTM obtains more dispersed proportions of topics than NB-NTM.",
"These results suggest that GNB-NTM tends to allocate less but more important topics to the corpus, i.e., the topics generated by GNB-NTM are more discriminative.",
"Since the document-topic distribution is not directly modeled and the Gaussian distribution samples are not non-negative, the previous neural methods except GSM cannot obtain explainable intermediate variables.",
"For the baseline of GSM, Miao et al. (2017) had demonstrated that the topics with higher probabilities were evenly distributed on the same 20News dataset, which indicates that our models outperform GSM on learning dispersed document topics.",
"We also study the dispersion of intermediate variables (i.e., topics) at the document level.",
"By randomly select a document as an example, we get the normalized document topic weight from NB-NTM and GNB-NTM to explore whether the topic distributions of the document generated by our models are reasonable.",
"As shown in Figure 5, the document is about a standard computer, and the most related topics with large topic distributions are all related to computers, which validates the practical meaning of intermediate variables of both NB-NTM and GNB-NTM at the document level.",
"From the keywords in the most related topics, we further observe that GNB-NTM can identify more computer-related words than NB-NTM.",
"When compared to the whole semantic space as shown in Figure 4, both NB-NTM and GNB-NTM generate more dispersed proportions of topics at the document level.",
"This phenomenon is consistent with the over-dispersed feature (i.e., the variance is larger than the mean) of documents.",
"In this paper, we present two neural mixed counting models named NB-NTM and GNB-NTM.",
"Different from the current time consuming Bayesian methods, our models apply to large-scale datasets through the efficient back-propagation algorithm and GPU acceleration.",
"When compared to the existing neural topic models, both NB-NTM and GNB-NTM can well model the random variables with I have a Standard Computer 486DX2/66mhz EISA Tower with 16MB RAM, a Quantum 240MB Hard Drive, 1.2 and 1.44 MB floppies and a Colorado 250MB tape drive.",
"over-dispersed and hierarchically dependent characteristics.",
"Extensive experiments on real-world datasets validate the effectiveness of our models in terms of perplexity, topic coherence, and producing explainable intermediate variables by generating dispersed proportions of document topics.",
"The results also indicate that NB distribution families can characterize text data aptly, which is essentially due to their conformity with the over-dispersed and sparse properties of natural language.",
"We are grateful to the reviewers for their constructive comments and suggestions on this study.",
"This work has been supported by the National Natural Science Foundation of China (61972426), Guang-dong Basic and Applied Basic Research Foundation (2020A1515010536), HKIBS Research Seed Fund 2019/20 (190-009), the Research Seed Fund (102367), and LEO Dr David P. Chan Institute of Data Science of Lingnan University, Hong Kong.",
"This work has also been supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (UGC/FDS16/E01/19), Hong Kong Research Grants Council through a General Research Fund (project no. PolyU 1121417), and by the Hong Kong Polytechnic University through a start-up fund (project no. 980V)."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"The goal of conversational machine reading is to answer user questions given a knowledge base text which may require asking clarification questions.",
"Existing approaches are limited in their decision making due to struggles in extracting question-related rules and reasoning about them.",
"In this paper, we present a new framework of conversational machine reading that comprises a novel E xplicit M emory T racker (EMT) to track whether conditions listed in the rule text have already been satisfied to make a decision.",
"Moreover, our framework generates clarification questions by adopting a coarse-to-fine reasoning strategy, utilizing sentence-level entailment scores to weight token-level distributions.",
"On the ShARC benchmark (blind, held-out) testset, EMT achieves new state-of-the-art results of 74.6% micro-averaged decision accuracy and 49.5 BLEU4.",
"We also show that EMT is more interpretable by visualizing the entailment-oriented reasoning process as the conversation flows.",
"Code and models are released at https://github.com/ Yifan-Gao/explicit_memory_tracker .",
"Statutory Maternity Pay To qualify for SMP you must: * earn on average at least 113 a week * give the correct notice * give proof you're pregnant Do I qualify for SMP?",
"I've been old enough to get my pension.",
"Do you earn on average at least 113 a week?",
"Yes No Rule Text User Scenario Initial Question Turn 1 Turn 2 Turn 3 Yes No Irrelevant Inquire Did you give the correct notice?",
"Decision: Yes No Irrelevant Inquire Decision: Yes No Irrelevant Inquire Decision: No ## Taking more leave than the entitlement If a worker has taken more leave than they're entitled to, their employer must not take money from their final pay unless it's been agreed beforehand in writing.",
"In conversational machine reading (CMR), machines can take the initiative to ask users questions that help to solve their problems, instead of jumping into a conclusion hurriedly (Saeidi et al., 2018).",
"In this case, machines need to understand the knowledge base (KB) text, evaluate and keep track of the user scenario, ask clarification questions, and then make a final decision.",
"This interactive behavior between users and machines has gained more attention recently because in practice users are unaware of the KB text, thus they cannot provide all the information needed in a single turn.",
"For instance, consider the example in Figure 1 taken from the ShARC dataset for CMR (Saeidi et al., 2018).",
"A user posts her scenario and asks a question on whether her employer can take money from her final pay.",
"Since she does not know the relevant rule text, the provided scenario and the initial question(s) from her are often too underspecified for a machine to make a certain decision.",
"Therefore, a machine has to read the rule text and ask a series of clarification questions until it can conclude the conversation with a certain answer.",
"Most existing approaches (Zhong and Zettlemoyer, 2019; Sharma et al., 2019) formalize the CMR problem into two sub-tasks.",
"The first is to make a decision among Yes , No , Irrelevant , and Inquire at each dialog turn given a rule text, a user scenario, an initial question and the current dialog history.",
"If one of Yes , No , or Irrelevant is selected, it implies that a final decision ( Yes / No ) can be made in response to the user's initial question, or stating the user's initial question is unanswerable ( Irrelevant ) according to the rule text.",
"If the decision at the current turn is Inquire , it will then trigger the second task for follow-up question generation, which extracts an underspecified rule span from the rule text and generates a follow-up question accordingly.",
"However, there are two main drawbacks to the existing methods.",
"First, with respect to the reasoning of the rule text, existing methods do not explicitly track whether a condition listed in the rule has already been satisfied as the conversation flows so that it can make a better decision.",
"Second, with respect to the extraction of question-related rules, it is difficult in the current approach to extract the most relevant text span to generate the next question.",
"For example, the state-of-the-art E 3 model (Zhong and Zettlemoyer, 2019) has only 60.6% F1 for question-related span extraction.",
"To address these issues, we propose a new framework of conversational machine reading with a novel E xplicit M emory T racker (EMT), which explicitly tracks each rule sentence to make decisions and generate follow-up questions.",
"Specifi-cally, EMT first segments the rule text into several rule sentences and allocates them into its memory.",
"Then the initial question, user scenario, and dialog history are fed into EMT sequentially to update each memory module separately.",
"At each dialog turn, EMT predicts the entailment states (satisfac-tion or not) for every rule sentence, and makes a decision based on the current memory status.",
"If the decision is Inquire , EMT extracts a rule span to generate a follow-up question by adopting a coarse-to-fine reasoning strategy (i.e., weighting token-level span distributions with its sentence-level entailment scores).",
"Compared to previous methods which only consider entailment-oriented reasoning for decision making or follow-up question generation, EMT utilizes its updated memory modules to reason out these two tasks in a unified manner.",
"We compare EMT with the existing approaches on the ShARC dataset (Saeidi et al., 2018).",
"Our results show that explicitly tracking rules with external memories boosts both the decision accuracy and the quality of generated follow-up questions.",
"In particular, EMT outperforms the previous best model E 3 by 1.3 in macro-averaged decision accuracy and 10.8 in BLEU4 for follow-up question generation.",
"In addition to the performance improvement, EMT yields interpretability by explicitly tracking rules, which is visualized to show the entailment-oriented reasoning process of our model.",
"As illustrated in Figure 2, our proposed method consists of the following four main modules.",
"(1) The Encoding module uses BERT (Devlin et al., 2019) to encode the concatenation of the rule text, initial question, scenario and dialog history into contextualized representations.",
"(2) The Explicit Memory Tracking module sequentially reads the initial question, user scenario, multi-turn dialog history, and updates the entailment state of each rule sentence.",
"(3) The Decision Making module does entailment-oriented reasoning based on the updated states of rule sentences and makes a decision among Yes , No , Irrelevant , and Inquire .",
"(4) If the decision is Inquire , the Question Generation module is activated, which reuses the updated states of rule sentences to identify the underspecified rule sentence and extract the most informative span within it in a coarse-to-fine manner.",
"Then it rephrases the extracted span into a well-formed follow-up question.",
"Let x R , x Q , x S , [ x H, 1 , x H, 2 , ..., x H,P ] denote the input of rule text, initial question, user scenario, and P turns of dialog history, each of which is a sequence of tokens.",
"We first split the rule text x R into several rule sentences [ x R, 1 , x R, 2 , ..., x R,M ] according to sentence boundary or bullet points, insert [CLS] tokens at the start of each sentence, and concatenate them into one sequence: [ [CLS] , x R, 1 ; ... ; [CLS] , x R,M ; [CLS] , x Q ; [CLS] , x S ; [CLS] , x H, 1 ; ... ; [CLS] , x H,P ].",
"Then we use BERT (Devlin et al., 2019), a pretrained transformer encoder (Vaswani et al., 2017) [CLS] Tok 1 Tok 2 Tok n [CLS] Tok 1 Tok 2 Tok n [CLS] Tok 1 Tok 2 Tok n [CLS] Tok 1 [CLS] Tok 1 [CLS] Tok 1 [CLS] Tok 1 RuleSentence1 RuleSentence2 RuleSentence3 InitialQuestion Scenario Q 1 ,A 1 Q 2 ,A 2 Rule Text DialogHistory BERTTransformerEncoder k 1 u 1,1 u 1,2 u 1,n k 2 u 2,1 u 2,2 u 2,n k 3 u 3,1 u 3,2 u 3,n s Q s S s 1 s 2 Rule Sent.",
"We treat each [CLS] representation as feature representation of the sentence that follows it.",
"In this way, we receive both token-level representation and sentence-level representation for each sentence.",
"We denote sentence-level representation of the rule sentences as k 1 , ..., k M and their token-level representation as [( u 1 , 1 , ..., u 1 ,n 1 ) , ..., ( u M, 1 , ..., u M,n M )] , where n i is number of tokens for rule sentence i .",
"Similarly, we denote the sentence-level representation of the initial question, user scenario, and P turns of dialog history as s Q , s S , and s 1 , ..., s P , respectively.",
"All these vectorized representations are of d dimensions (768 for BERT-base).",
"Given the rule sentences k 1 , ..., k M and the user provided information including the initial question s Q , scenario s S , and P turns of dialog history s 1 , ..., s P , our goal is to find implications between the rule sentences and the user provided information.",
"Inspired by Recurrent Entity Network (Henaff et al., 2017) which tracks the world state given a sequence of textual statements, we propose the E xplicit M emory T racker (EMT), a gated recurrent memory-augmented neural network which explicitly tracks the states of rule sentences by sequentially reading the user provided information.",
"As shown in Figure 2, EMT explicitly takes rule sentences k 1 , ..., k M as keys, and assigns a state v i to each key to save the most updated entailment information (whether this rule has been entailed from the user provided information).",
"Each value state v i is initialized with the same value of its corresponding rule sentence: v i, 0 = k i .",
"Then EMT sequentially reads user provided information s Q , s S , s 1 , ..., s P .",
"At time step t , the value state v i,t for i -th rule sentence is updated by incorporating the user provided information s t { s Q , s S , s 1 , ..., s P } , v i,t = ReLU ( W k k i + W v v i,t + W s s t ) , (1) g i = ( s (cid:62) t k i + s (cid:62) t v i,t ) [0 , 1] , (2) v i,t = v i,t + g i (cid:12) v i,t R d , v i,t = v i,t (cid:107) v i,t (cid:107) (3) where W k , W v , W s R d d , represents a sigmoid function, and (cid:12) is scalar product.",
"As the user background input s t may only be relevant to parts of the rule sentences, the gating function in Equation 2 matches s t to the memory.",
"Then EMT updates state v i,t only in a gated manner.",
"Finally, the normalization allows EMT to forget previous information, if necessary.",
"After EMT sequentially reads all user provided information (the initial question, scenario, and P turns of history dialog) and finishes entailment-oriented reasoning, keys and final states of rule sentences are denoted as ( k 1 , v 1 ) , ..., ( k M , v M ) , which will be used in the decision making module (Section 2.3) and question generation module (Section 2.4).",
"The key difference between our Explicit Memory Tracker and Recurrent Entity Network (REN) (Henaff et al., 2017) is that each key k i in our case has an explicit meaning (the corresponding rule sentence) and thus it changes according to different rule texts while in REN, the underlined meaning of keys are learned through training and they are fixed throughout all textual inputs.",
"Moreover, the number of keys is dynamic in our case (according to the number of sentences parsed from the rule text) while that is predefined in REN.",
"Based on the most up-to-date key-value states of rule sentences ( k 1 , v 1 ) , ..., ( k M , v M ) from the EMT, the decision making module predicts a decision among Yes, No, Irrelevant , and Inquire .",
"First, we use self-attention to compute a summary vector c for the overall state: i = w (cid:62) [ k i ; v i ] + b R 1 (4) i = softmax ( ) i [0 , 1] (5) c = (cid:88) i i [ k i ; v i ] R d (6) where [ k i ; v i ] denotes the concatenation of the vectors k i and v i , and i is the attention weight for the rule sentence k i that determines the likelihood that k i is entailed from the user provided information.",
"Then the final decision is made through a linear transformation of the summary vector c : z = W z c + b z R 4 (7) where z R 4 contains the model's score for all four possible classes.",
"Let l indicate the correct decision, the decision making module is trained with the following cross entropy loss: L dec = log softmax ( z ) l (8) In order to explicitly track whether a condition listed in the rule has already been satisfied or not, we add a subtask to predict the entailment states for each rule sentence.",
"The possible entailment labels are Entailment , Contradiction and Unknown ; details of acquiring such labels are described in Section 3.1.",
"With this intermediate supervision, the model can make better decisions based on the correct entailment state of each rule sentence.",
"The entailment prediction is made through a linear transformation of the most up-to-date key-value state [ k i ; v i ] from the EMT module: e i = W e [ k i ; v i ] + b e R 3 (9) where e i R 3 contains scores of three entailment states [ entailment ,i , contradiction ,i , unknown ,i ] for the i -th rule sentence.",
"Let r indicate the correct entailment state.",
"The entailment prediction subtask is trained with the following cross entropy loss, normalized by the number of rule sentences M : L entail = 1 MM (cid:88) i =1 log softmax ( e i ) r (10) 2.4 Follow-up Question Generation When the decision making module predicts Inquire , a follow-up question is required for further clarification from the user.",
"In the same spirit of previous studies (Zhong and Zettlemoyer, 2019; Sharma et al., 2019), we decompose this problem into two stages.",
"First, we extract a span inside the rule text which contains the underspecified user information (we name it as underspecified span hereafter).",
"Second, we rephrase the extracted underspecified span into a follow-up question.",
"We propose a coarse-to-fine approach to extract the underspecified span for the first stage, and finetune the pretrained language model UniLM (Dong et al., 2019) for the follow-up question rephrasing, as we describe below.",
"Coarse-to-Fine Reasoning for Underspecified Span Extraction.",
"Zhong and Zettlemoyer (2019) extract the underspecified span by extracting several spans and retrieving the most likely one.",
"The disadvantage of their approach is that extracting multiple rule spans is a challenging task, and it will propagate errors to the retrieval stage.",
"Instead of extracting multiple spans from the rule text, we propose a coarse-to-fine reasoning approach to directly identify the underspecified span.",
"For this, we reuse the Unknown scores unknown ,i from the entailment prediction subtask (Eqn. 9), and normalize it (over the rule sentences) with a softmax to determine how likely that the i -th rule sentence contains the underspecified span: i = softmax ( unknown ) i [0 , 1] (11) Knowing how likely a rule sentence is underspecified greatly reduces the difficulty to extract the underspecified span within it.",
"We adopt a soft selection approach to modulate span extraction (i.e., predicting the start and end points of a span) score by rule sentence identification score i .",
"We follow the BERTQA approach (Devlin et al., 2019) to learn a start vector w s R d and an end vector w e R d to locate the start and end positions from the whole rule text.",
"The probability of j -th word in i -th rule sentence u i,j being the start/end of the span is computed as a dot product between w s and u i,j , modulated by its rule sentence score i : i,j = w (cid:62) s u i,j i , i,j = w (cid:62) e u i,j i (12) We extract the span with the highest span score under the restriction that the start and end positions must belong to the same rule sentence.",
"Let s and e be the ground truth start and end position of the span.",
"The underspecified span extraction loss is computed as the pointing loss L span,s = 1 l = inquire log softmax ( ) s (13) L span,e = 1 l = inquire log softmax ( ) e (14) The overall loss is the sum of the decision loss, entailment prediction loss and span extraction loss L = L dec + 1 L entail + 2 L span (15) where 1 and 2 are tunable hyperparameters.",
"Question Rephrasing.",
"The underspecified span extracted in the previous stage is fed into the question rephrasing model to generate a follow-up question.",
"We finetune the UniLM (Dong et al., 2019) to achieve this goal.",
"UniLM is a pretrained language model which demonstrates its effectiveness in both natural language understanding and generation tasks.",
"Specifically, it outperforms previous methods by a large margin on the SQuAD question generation task (Du and Cardie, 2018).",
"As shown in Figure 2, UniLM takes the concatenation of rule text and the extracted rule span as input, separated by the sentinel tokens: [CLS] ruletext [SEP] extracted-span [SEP] .",
"The training target is the follow-up question we want to generate.",
"Please refer to Dong et al. (2019) for details on finetuning UniLM and doing inference with it.",
"Dataset.",
"We conduct experiments on the ShARC CMR dataset (Saeidi et al., 2018).",
"It contains 948 dialog trees, which are flattened into 32,436 examples by considering all possible nodes in the trees.",
"Each example is a quintuple of (rule text, initial question, user scenario, dialog history, de-cision), where decision is either one of { Yes , No , Irrelevant } or a follow-up question.",
"The train, development, and test dataset sizes are 21890, 2270, and 8276, respectively.",
"1 End-to-End Evaluation.",
"Organizers of the ShARC competition evaluate model performance as an end-to-end task.",
"They first evaluate the micro-and macro-accuracy for the decision making task.",
"If both the ground truth decision and the predicted decision are Inquire , then they evaluate the generated follow-up question using BLEU score (Pap-ineni et al., 2002).",
"However, this way of evaluating follow-up questions has one issue.",
"If two models have different Inquire predictions, the follow-up questions for evaluation will be different, making the comparison unfair.",
"For example, a model could classify only one example as Inquire in the whole test set and generate the follow-up question correctly, achieving a 100% BLEU score.",
"Therefore, we also propose to evaluate the follow-up question generation performance in an oracle evaluation setup as described below.",
"Oracle Question Generation Evaluation.",
"In this evaluation, we ask the models to generate follow-up questions whenever the ground truth decision is Inquire , and compute the BLEU score for the generated questions accordingly.",
"In this setup, there are 6804 examples for training and 562 examples for evaluation.",
"Data Augmentation.",
"In the annotation process of the ShARC dataset, the scenario is manually constructed from a part of the dialog history, and that excerpt of the dialog is not shown as input to 1 Leaderboard: https://sharc-data.github.",
"the model.",
"Instead, it is treated as the evidence which should be entailed from the scenario.",
"To effectively utilize this additional signal, we construct more examples by replacing the scenario with the evidence .",
"This leads to additional 5800 training instances.",
"We use this augmented dataset for the EMT model and its ablations in our experiments.",
"Labeling Underspecified Spans.",
"To supervise the process of coarse-to-fine reasoning, we follow Zhong and Zettlemoyer (2019) to label the rule spans.",
"We first trim the follow-up questions in the conversation by removing question words do, does, did, is, was, are, have and the question mark ?.",
"For each trimmed question, we find the shortest span inside the rule text which has the minimum edit distance from the trimmed question, and treat it as an underspecified span .",
"Acquiring Labels for Entailment.",
"To supervise the subtask of entailment prediction for each rule sentence, we use a heuristic to automatically label its entailment state.",
"For each rule sentence, we first find if it contains any underspecified span for the questions in the dialog history (and evidence text), and use the corresponding Yes/No answers to label the rule text as Entailment / Contradiction .",
"The rule text without any underspecified span is labeled as Unknown .",
"Implementation Details.",
"We tokenize all text inputs with spaCy (Honnibal and Montani, 2017).",
"The EMT model and the follow-up question generation model UniLM are trained separately and pipelined together at test time.",
"For EMT, we use the uncased BERT base model (Wolf et al., 2019) for encoding.",
"We train EMT with Adam (Kingma and Ba, 2015) optimizer with a learning rate of 5e-5, a warm-up rate of 0.1 and a dropout rate of 0.35.",
"The loss weights 1 and 2 in Eq.",
"15 are set to 10 and 0.6 respectively, based on the development set results.",
"For UniLM, we fine-tuning it with a batch Models Yes No Inquire Irrelevant BERTQA 61.2 61.0 62.6 96.4 E 3 65.9 70.6 60.5 96.4 UrcaNet* 63.3 68.4 58.9 95.7 EMT 70.5 73.2 70.8 98.6 Table 2: Class-wise decision prediction accuracy on the development set (*: reported in the paper).",
"To reduce the variance of our experimental results, all experiments reported on the development set are repeated 5 times with different random seeds.",
"We report the average results along with their standard deviations.",
"End-to-End Task.",
"The end-to-end performance on the held-out test set is shown in Table 1. EMT outperforms the existing state-of-the-art model E 3 on decision classification in both microand macro-accuracy.",
"Although the BLEU scores are not directly comparable among different models, EMT achieves competitive BLEU1 and BLEU4 scores on the examples it makes an Inquire decision.",
"The results show that EMT has strong capability in both decision making and follow-up question generation tasks.",
"Table 2 presents the class-wise accuracy on the four decision types.",
"EMT improves on the Inquire decision significantly.",
"It is because EMT can explicitly track the states of all rule sentences; it has a macro accuracy of 80% on the End-to-End Task Oracle Question Generation Task Models Micro Acc.",
"Oracle Question Generation Task.",
"To establish a concrete question generation evaluation, we conduct experiments on our proposed oracle question generation task.",
"We compare our model EMT with E 3 and an extension E 3 +UniLM; implementations for other methods are not publicly available.",
"E 3 +UniLM replaces the editor of E 3 with our finetuned UniLM.",
"The results on the development set and 10-fold cross validation are shown in Table 3. Firstly, E 3 +UniLM performs better than E 3 , validating the effectiveness of our follow-up question rephrasing module: finetuned UniLM.",
"More importantly, EMT consistently outperforms E 3 and E 3 +UniLM on both the development set and the cross validation by a large margin.",
"Although there is no ground truth label for span extraction, we can infer from the question generation results that our coarse-to-fine reasoning approach extracts better spans than the extraction and retrieval modules of E 3 .",
"This is because E 3 propagates error from the span extraction module to the span retrieval module while our coarse-to-fine approach avoids this problem through weighting token-level span distributions with its sentence-level entailment scores.",
"We conduct an ablation study on the development set for both the end-to-end evaluation task and oracle question generation evaluation task.",
"We consider four ablations of our EMT model: (1) EMT (w/o data aug.) trains the model on the original ShARC training set and do not use any augmented data using the evidence.",
"(2) EMT (w/o c2f) extracts the rule span without weighted by the entailment score in Eqn.",
"12.",
"(3) EMT (w/o L entail ) removes the entailment state prediction subtask in decision making, and thus there is no entailment score for underspecified span extraction in Eqn.",
"12.",
"(4) EMT (w/o tracker) that removes the explicit memory tracking module.",
"Instead, it treats the [CLS] token for each rule sentence as the state for decision making and span extraction.",
"With the help of data augmentation, EMT boosts the performance slightly on the end-to-end task, especially for the question generation task which originally has only 6804 training examples.",
"The augmented training instances boosts the performance even though the augmentation method does not produce any new question.",
"This implies that the size of the ShARC dataset is a bottleneck for an effective end-to-end neural models.",
"Without the coarse-to-fine reasoning for span extraction, EMT (w/o c2f) drops by 1.53 on BLEU4, which implies that it is necessary for the question generation task.",
"The reason is that, as a classification task, entailment state prediction can be trained reasonably well (80% macro accuracy) with a limited amount of data (6804 training examples).",
"Therefore, the Unknown scores in the entailment state prediction can guide the span extraction via a soft modulation (Equation 12).",
"On the other hand, one-step span extraction method does not utilize the entailment states of the rule sentences from EMT, meaning it does not learn to extract the underspecified part of the rule text.",
"With the guidance of explicit entailment supervision, EMT outperforms EMT (w/o L entail ) by a large margin.",
"Intuitively, knowing the entailment states of the rule sentences makes the decision making process easier for complex tasks that require logical reasoning on conjunctions of conditions or disjunctions of conditions.",
"It also helps span extraction through the coarse-to-fine approach.",
"Without the explicit memory tracker described in Section 2.2, EMT (w/o tracker) performs poorly on the decision making task.",
"Although there exist interactions between rule sentences and user information in BERT-encoded representations through Initial Question: Do I qualify for SMP?",
"multi-head self-attentions, it is not adequate to learn whether conditions listed in the rule text have already been satisfied or not.",
"To get better insights into the underlying entailment-oriented reasoning process of EMT, we examine the entailment states of the rule sentences as the conversation flows.",
"Two example cases are provided in Figure 3. Given a rule text containing several rule sentences (S1, S2, S3, ...), we show the transition of predicted entailment states [ entailment , contradiction , unknown ] over multiple turns in the dialogue.",
"Rules in Bullet Points.",
"Figure 3",
"(a) shows an example in which the rule text is expressed in the conjunction of four bullet-point conditions.",
"On the first turn, EMT reads Scenario and Initial Ques-tion and they only imply that the question from the user is relevant to the rule text.",
"Thus the entailment states for all the rule sentences are Unknown , and EMT makes an Inquire decision, and asks a question.",
"Once a positive answer is received from the user part for the first turn, EMT transits the entailment state for rule sentence S3 from Unknown to Entailment , but it still cannot conclude the dialogue, so it asks a second follow-up question.",
"Then we see that the user response for the second question is negative, which makes EMT conclude a final decision No in the third turn.",
"Rules in Plain Text.",
"Figure 3",
"(b) presents a more challenging case where the rules are in plain text.",
"Therefore, it is not possible to put the whole sentence into a clarification question as EMT in Figure",
"3(a) does.",
"In this case, both the decision making module and span extraction module contribute to helping the user.",
"The span extraction module locates the correct spans inside S2, and EMT concludes a correct answer No after knowing the user does not fulfill the condition listed in S2.",
"Decision Making Error.",
"Out of 2270 examples in the development set, our EMT produces incorrect decisions on 608 cases.",
"We manually analyze 104 error cases.",
"In 40 of these cases, EMT fails to derive the correct entailment states for each rule sentence, while in 23 cases, the model predicts the correct entailment states but cannot predict correct decisions based on that.",
"These errors suggest that explicitly modeling the logic reasoning process is a promising direction.",
"Another challenge comes from extracting useful information from the user scenarios.",
"In 24 cases, the model fails to make the correct decision because it could not infer necessary user information from the scenarios.",
"Last but not least, parsing the rule text into rule sentences is also a challenge.",
"As shown in Figure",
"3(b), the plain text usually contains complicated clauses for rule conditions, which is difficult to disentangle them into separate conditions.",
"In 17 cases, one single rule sentence contains multiple conditions, which makes the model fail to conduct the entailment reasoning correctly.",
"Question Generation Error.",
"Out of 562 question generation examples in the development set, our EMT locates the underspecified span poorly in 115 cases (span extraction F1 score 0.5).",
"We manually analyze 52 wrong question generation cases.",
"Out of 29 cases of them, EMT fails to predict correct entailment states for rule sentences, and thus does not locate the span within the ground truth rule sentence, while in 9 cases, it finds the correct rule sentence but extracts a different span.",
"Another challenge comes from the one-to-many problem in sequence generation.",
"When there are multiple underspecified rule sentences, the model asks about one of these underspecified rule sentences which is different from the ground truth one.",
"This suggests that new evaluation metrics could be proposed by taking this into consideration.",
"ShARC Conversational Machine Reading (Saeidi et al., 2018) differs from conversational question answering (Choi et al., 2018; Reddy et al., 2019) and conversational question generation (Gao et al., 2019) in that 1) machines are required to formulate follow-up questions to fill the information gap, and 2) machines have to interpret a set of complex decision rules and make a question-related conclusion, instead of extracting the answer from the text.",
"CMR can be viewed as a special type of task-oriented dialog systems (Wen et al., 2017; Zhong et al., 2018; Wu et al., 2019) to help users achieve their goals.",
"However, it does not rely on predefined slot and ontology information but natural language rules.",
"On the ShARC CMR challenge (Saeidi et al., 2018), Lawrence et al. (2019) propose an end-to-end bidirectional sequence generation approach with mixed decision making and question generation stages.",
"Saeidi et al. (2018) split it into sub-tasks and combines hand-designed sub-models for decision classification, entailment and question generation.",
"Zhong and Zettlemoyer (2019) propose to extract all possible rule text spans, assign each of them an entailment score, and edit the span with the highest score into a follow-up question.",
"However, they do not use these entailment scores for decision making.",
"Sharma et al. (2019) study patterns of the dataset and include additional em-beddings from dialog history and user scenario as rule markers to help decision making.",
"Compared to these methods, our EMT has two key differences: (1) EMT makes decision via explicitly entailment-oriented reasoning, which, to our knowledge, is the first such approach; (2) Instead of treating decision making and follow-up question generation (or span extraction) separately, EMT is a unified approach that exploits its memory states for both decision making and question generation.",
"Memory-Augmented Neural Networks.",
"Our work is also related to memory-augmented neural networks (Graves et al., 2014, 2016), which have been applied in some NLP tasks such as question answering (Henaff et al., 2017) and machine translation (Wang et al., 2016).",
"For dialog applications, Zhang et al. (2019) propose a dialogue management model that employs a memory controller and a slot-value memory, Bordes et al. (2016) learn a restaurant bot by end-to-end memory networks, Madotto et al. (2018) incorporate external memory modules into dialog generation.",
"In this paper, we have proposed a new framework for conversational machine reading (CMR) that comprises a novel explicit memory tracker (EMT) to track entailment states of the rule sentences explicitly within its memory module.",
"The updated states are utilized for decision making and coarse-to-fine follow-up question generation in a unified manner.",
"EMT achieved a new state-of-the-art result on the ShARC CMR challenge.",
"EMT also gives interpretability by showing the entailment-oriented reasoning process as the conversation flows.",
"While we conducted experiments on the ShARC dataset, we believe the proposed methodology could be extended to other kinds of CMR tasks.",
"We thank Max Bartolo and Patrick Lewis for evaluating our submitted models on the hidden test set.",
"The work described in this paper was partially supported by following projects from the Research Grants Council of the Hong Kong Special Administrative Region, China: CUHK 2300174 (Collabo-rative Research Fund, No. C5026-18GF); CUHK 14210717 (RGC General Research Fund)."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"The end-to-end nature of neural machine translation (NMT) removes many ways of manually guiding the translation process that were available in older paradigms.",
"Recent work, however, has introduced a new capability: lexically constrained or guided decoding, a modification to beam search that forces the inclusion of pre-specified words and phrases in the output.",
"However, while theoretically sound, existing approaches have computational complexities that are either linear (Hokamp and Liu, 2017) or exponential (An-derson et al., 2017) in the number of constraints.",
"We present an algorithm for lexically constrained decoding with a complexity of O (1) in the number of constraints.",
"We demonstrate the algorithm's remarkable ability to properly place these constraints, and use it to explore the shaky relationship between model and BLEU scores.",
"Our implementation is available as part of SOCKEYE .",
"One appeal of the phrase-based statistical approach to machine translation (Koehn et al., 2003) was that it provided control over system output.",
"For example, it was relatively easy to incorporate domain-specific dictionaries, or to force a translation choice for certain words.",
"These kinds of interventions were useful in a range of settings, including interactive machine translation or domain adaptation.",
"In the new paradigm of neural machine translation (NMT), these kinds of manual interventions are much more difficult, and a lot of time has been spent investigating how to restore them (cf.",
"Arthur et al. (2016)).",
"At the same time, NMT has also provided new capabilities.",
"One interesting recent innovation is lexically constrained decoding , a modification to beam search that allows the user to specify words No one has the intention of building a wall.",
"errichten",
"Niemand hat die Absicht, eine Mauer zu bauen.",
"No one has the intention, a wall to build.",
"Niemand hat die Absicht, eine Mauer zu errichten .",
"No one has the intention, a wall to construct . Keiner Keiner \"errichten Keiner hat die Absicht, eine Mauer zu bauen.",
"No one has the intention, a wall to build.",
"Keiner hat die Absicht, eine Mauer zu errichten .",
"No one has the intention, a wall to construct .",
"and phrases that must appear in the system output (Figure 1).",
"Two algorithms have been proposed for this: grid beam search (Hokamp and Liu, 2017, GBS) and constrained beam search (Ander-son et al., 2017, CBS).",
"These papers showed that these algorithms do a good job automatically placing constraints and improving results in tasks such as simulated post-editing, domain adaptation, and caption generation.",
"A downside to these algorithms is their runtime complexity: linear (GBS) or exponential (CBS) in the number of constraints.",
"Neither paper reported decoding speeds, but the complexities alone suggest a large penalty in runtime.",
"Beyond this, other factors of these approaches (a variable sized beam, finite-state machinery) change the decoding procedure such that it is difficult to integrate with other operations known to increase throughput, like batch decoding.",
"We propose and evaluate a new algorithm, dynamic beam allocation (DBA), that is constant in the number of provided constraints (Table 1).",
"Our algorithm works by grouping together hypotheses that have met the same number of constraints into 1314 work complexity Anderson et al. (2017) O ( Nk 2 C ) Hokamp and Liu (2017) O ( NkC ) This work O ( Nk ) Table 1: Complexity of decoding (sentence length N , beam size k , and constraint count C ) with target-side constraints under various approaches.",
"banks (similar in spirit to the grouping of hypotheses into stacks for phrase-based decoding (Koehn et al., 2003)) and dynamically dividing a fixed-size beam across these banks at each time step.",
"As a result, the algorithm scales easily to large constraint sets that can be created when words and phrases are expanded, for example, by sub-word processing such as BPE (Sennrich et al., 2016).",
"We compare it to GBS and demonstrate empirically that it is significantly faster, making constrained decoding with an arbitrary number of constraints feasible with GPU-based inference.",
"We also use the algorithm to study beam search interactions between model and metric scores, beam size, and pruning.",
"Inference in statistical machine translation seeks to find the output sequence, y , that maximizes the probability of a function parameterized by a model, , and an input sequence, x :",
"The space of possible translations, Y , is the set of all sequences of words in the target language vocabulary, VT .",
"It is impossible to explore this entire space.",
"Models decompose this problem into a sequence of time steps, t .",
"At each time step, the model produces a distribution over VT .",
"The simplest approach to translation is therefore to run the steps of the decoder, choosing the most-probable token at each step, until either the end-of-sentence token, h / s i , is generated, or some maximum output length is reached.",
"An alternative, which explores a slightly larger portion of the search space, is beam search.",
"In beam search (Lowerre, 1976; Sutskever et al., 2014), the decoder maintains a beam of size k containing a set of active hypotheses (Algorithm 1).",
"At each time step t , the decoder model is used to produce a distribution over the target-language vocabulary, VT , for each of these hypotheses.",
"This produces a large matrix of dimensions k | VT | , Algorithm 1 Beam search.",
"Inputs: max output length N , beam size k .",
"that can be computed quickly with modern GPU hardware.",
"Conceptually, a (row, column) entry ( i, j ) in this matrix contains the state obtained from starting from the i th state in the beam and generating the target word corresponding to the j th word of VT .",
"The beam for the next time step is filled by taking the states corresponding to the k -best items from this entire matrix and sorting them.",
"A principal difference between beam search for phrase-based and neural MT is that in NMT, there is no recombination: each hypothesis represents a complete history, back to the first word generated.",
"This makes it easy to record properties of the history of each hypothesis that were not possible with dynamic programming.",
"Hokamp and Liu (2017) introduced an algorithm for forcing certain words to appear in the output called grid beam search (GBS).",
"This algorithm takes a set of constraints, which are words that must appear in the output, and ensures that hypotheses have met all these constraints before they can be considered to be completed.",
"For C constraints, this is accomplished by maintaining C + 1 separate beams or banks , B 0 , B 1 , . . . , BC , where B i groups together hypotheses that have generated (or met ) i of the constraints.",
"Decoding proceeds as with standard beam decoding, but with the addition of bookkeeping that tracks the number of constraints met by each hypothesis, and ensures that new candidates are generated, such that each bank is filled at each time step.",
"When beam search is complete, the hypothesis returned is the highest-scoring one in bank BC .",
"Conceptually, this can be thought of as adding an additional dimension to the beam, since we multiply out some base beam size b by (one plus) the number of constraints.",
"We note two problems with GBS:",
"Decoding complexity is linear in the number of constraints: The effective beam size, k ( C + 1) , varies with the number of constraints.",
"It is impractical.",
"The beam size changes for every sentence, whereas most decoders specify the beam size at model load time in order to optimize computation graphs, specially when running on GPUs.",
"It also complicates beam search optimizations that increase throughput, such as batching.",
"Our extension, fast lexically-constrained decoding via dynamic beam allocation (DBA), addresses both of these issues.",
"Instead of maintaining C + 1 beams, we maintain a single beam of size k , as with unconstrained decoding.",
"We then dynamically allocate the slots of this beam across the constraint banks at each time step.",
"There is still bookkeeping overhead, but this cost is constant in the number of constraints, instead of linear.",
"The result is a practical algorithm for incorporating arbitrary target-side constraints that fits within the standard beam-decoding paradigm.",
"Our algorithm (Algorithm 2) is based on a small but important alteration to GBS.",
"Instead of multiplying the beam by the number of constraints, we divide .",
"A fixed beam size is therefore provided to the decoder, just as in standard beam search.",
"As different sentences are processed with differing numbers of constraints, the beam is dynamically allocated to these different banks.",
"In fact, the allocation varies not just by sentence, but across time steps in processing each individual sentence.",
"We need to introduce some terminology.",
"A word constraint provided to the decoder is a single token in the target language vocabulary.",
"A phrasal constraint is a sequence of two or more contiguous tokens.",
"Phrasal constraints come into play when the user specifies a multi-word phrase directly (e.g., high-ranking member ), or when a word gets broken up by subword splitting (e.g., thou@@ ghtful ).",
"The total number of constraints is the sum of the number of tokens across all word and phrasal constraints.",
"It is easier for the decoder to place multiple sequential tokens in a phrasal constraint (where the permutation is fixed) compared to placing separate, independent constraints (see discussion at the end of 5), but the algorithm does not distinguish them when counting.",
"DBA fits nicely within standard beam decoding; we simply replace the kbest implementation from Algorithm 1 with one that involves a bit more bookkeeping.",
"Instead of selecting the topk items from the k VT scores matrix, the new algorithm must consider two important matters.",
"1. Generating a list of candidates ( 3.1).",
"Whereas the baseline beam search simply takes the topk items from the scores matrix (a fast operation on a GPU), we now need to ensure that candidates progress through the set of provided constraints.",
"2. Allocating the beam across the constraint banks ( 3.2).",
"With a fixed-sized beam and an arbitrary number of constraints, we need to find an allocation strategy for dividing the beam across the constraint banks.",
"We refer to Figure 2 for discussion of the algorithm.",
"The set of candidates for the beam at time step t + 1 is generated from the hypotheses in the current beam at step t , which are sorted in decreasing order, with the highest-scoring hypothesis at position 1. The DECODER-STEP function of beam search generates a matrix, scores , where each row r corresponds to a probability distribution over all target words, expanding the hypothesis in position r in the beam.",
"We build a set of candidates from the following items: 1. The best k tokens across all rows of scores (i.e., normal topk ); 2. for each hypothesis in the beam, all unmet constraints (to ensure progress through the constraints); and 3. for each hypothesis in the beam, the single-best token (to ensure consideration of partially-completed hypotheses).",
"Each of these candidates is denoted by its coordinates in scores .",
"The result is a set of candidates which can be grouped into banks according to how many constraints they have met, and then sorted within those banks.",
"The new beam for timestep t + 1 is then built from this list according to an allocation policy (next section).",
"t = [random.random() t = [b + x / sum(t) for x",
"For hypotheses partially through a phrasal constraint, special care must be taken.",
"If a phrasal constraint has been begun, but not finished, and a token is chosen that does not match the next word of the constraint, we must reset or unwind those tokens in this constraint that are marked as having been met.",
"This permits the decoder to abort the generation of a phrasal constraint, which is important in situations where a partial prefix of a phrasal constraint appears in the decoded sentence earlier than the entire phrase.",
"for x in range(10)]; in t];",
"print('\\t'.join(['{:.2f}'.format(x) for x in t])) Figure 2: A single step of the constrained decoder.",
"For example, space allocated at timestep 1 to a bank representing candidates having met more than one constraint cannot be used, and similarly, for later timesteps, it seems wasteful to allocate space to bank 1. Additionally, if the number of candidates in a bank is smaller than the allocation for that bank, the beam is in danger of being underfilled.",
"These problems are mitigated by bank adjustment (Figure 3).",
"We provide here only a sketch of this procedure.",
"An overfilled bank is one that has been allocated more slots than it has candidates to fill.",
"Each such overfilled bank, in turn, gives its extra allotments to banks that have more candidates than slots, looking first to its immediate neighbors, and moving outward until it has distributed all of its extra slots.",
"In this way, the beam is filled, up to the minimum of the beam size or the number of candidates.",
"The task is to allocate a sizek beam across C + 1 constraint banks, where C may be greater than k .",
"We use the term bank to denote the portion of the beam reserved for items having met the same number of constraints (including one bank for hypotheses with zero constraints met).",
"We use a simple allocation strategy, setting each bin size to b k/C c , irrespective of the timestep.",
"Any remaining slots are assigned to the topmost or maximally constrained bank, C .",
"This may at first appear wasteful.",
"Hypotheses are not allowed to generate the end-of-sentence token, h / s i , unless they have met all of their constraints.",
"When beam search is finished, the highest-scoring completed item is returned.",
"Our experiments were done using SOCKEYE (Hieber et al., 2017).",
"We used an EnglishGerman model trained on the complete WMT'17 training corpora (Bojar et al., 2017), which we pre-1317 Algorithm 2 k -best extraction with DBA.",
"The model was a 4 layer RNN with attention.",
"We trained using the Adam optimizer with a batch size of 80 until cross-entropy on the development data (newstest2016) stopped increasing for 10 consecutive iterations.",
"For decoding, we normalize completed hypotheses (those that have generated h / s i ), dividing the cumulative sentence score by the number of words.",
"Unless otherwise noted, we apply threshold pruning to the beam, removing hypotheses whose log probability is not within 20 compared to the best completed hypothesis.",
"This pruning is applied to all hypotheses, whether they are complete or not.",
"(We explore the importance of this pruning in 6.3).",
"Decoding stops when either all hypotheses still on the beam are completed or the maximum length, N , is reached.",
"All experiments were run on a single a Volta P100 GPU.",
"No ensembling or batching were used.",
"For experiments, we used the newstest2014 EnglishGerman test set (the developer version, with 2,737 sentences).",
"All BLEU scores are computed on detokenized output using SACREBLEU (Post, 2018), 1 and are thus directly comparable to scores reported in the WMT evaluations.",
"We center our exploration of DBA by experimenting with constraints randomly selected from the references.",
"We extract five sets of constraints: from one to four randomly selected words from the reference ( rand1 to rand4 ), and a randomly selected four-word phrase ( phr4 ).",
"We then apply BPE to these sets, which often yields a much larger number of token constraints.",
"Statistics about these extracted phrases can be found in Table 2. We simulate the GBS baseline within our framework.",
"After applying BPE, We group together translations with the same number of constraints, C , and then translate them as a group, with the beam set for that group set to b ( C + 1) , where b is the base beam parameter.",
"We use b = 10 as reported in Hokamp et al., but also try smaller values of b = 5 and 1. Finally, we disable beam adjustment ( 3.2), so that the space allocated to each constraint bank does not change.",
"Table 4 compares speeds and BLEU scores (in the legend) as a function of the number of post-1 The signature is BLEU+case.mixed+lang.ende+numrefs.1+smooth.exp+test.wmt14+tok.13a+v.1.2.6 1318 num rand1 rand2 rand3 rand4 phr4 1 2,182 0 0 0 0 2 548 3,430 0 0 0 3 516 1,488 4,074 0 0 4 272 1,128 2,316 4,492 4,388 5 150 765 1,860 3,275 2,890 6 30 306 1,218 2,520 2,646 7 42 133 805 1,736 1,967 8 0 112 488 1,096 1,280 9 0 36 171 702 720 10 0 10 140 400 430 11+ 0 22 189 417 575 total 3,726 7,477 11,205 14,885 14,926 mean 1.36 2.73 4.09 5.43 5.45 Table 2: Histogram of the number of token constraints for some constraint sets after applying BPE (model trained with 32k merge operations).",
"BPE constraints for the rand3 dataset.",
"We plot all points for which there were at least 10 sentences.",
"The times are decoding only, and exclude model loading and other setup.",
"The linear trend in C is clear for GBS, as is the constant trend for DBA.",
"In terms of absolute runtimes, DBA improves considerably over GBS, whose beam sizes quickly become quite large with a non-unit base beam size.",
"On the Tesla V100 GPU, DBA ( k = 10 ) takes about 0.6 seconds/sentence, regardless of the number of constraints.",
"2 This is about 3x slower than unconstrained decoding.",
"It is difficult to compare these algorithms exactly because of GBS's variable beam size.",
"An important comparison is that between DBA ( k = 10 ) and GBS/1 (base beam of 1).",
"A beam of k = 10 is a common setting for decoding in general, and GBS/1 has a beam size of k 10 for C 9 .",
"At this setting, DBA finds better translations (BLEU 26.7 vs. 25.6) with the same runtime and with a fixed, instead of variable-sized, beam.",
"We note that the bank adjustment correction of the DBA algorithm allows it to work when C > = k .",
"The DBA ( k = 5 ) plot demonstrates this, while still finding a way to increase the BLEU score over GBS (23.5 vs. 22.3).",
"However, while possible, low k relative to C reduces the observed improvement considerably.",
"Looking at Figure 5 across different constraint sets, we can get a better feel for this relationship.",
"DBA is still always able to meet the constraints even with a beam size of 5, 2 On a K80, it is about 1.4 seconds / sentence Volta decoding rand3 beam size = 10 (DBA), 5(C+1) (GBS) GBS/10 (BLEU 27.8, k=(C+1) 10) GBS/5 (BLEU 27.5, k=(C+1) 5) GBS/1 (BLEU 25.6, k=(C+1) 1) DBA (BLEU 27.2, k=20) DBA (BLEU 26.7, k=10) DBA (BLEU 23.5, k=5) unconstrained (k=10) counts 3 1.8101 1.0757 0.4725 1.1538 0.6817 0.4805 0.2133 1358 4 2.1669 1.2887 0.4923 1.1083 0.7240 0.5139 0.2133 579 5 2.5580 1.4420 0.5556 1.1052 0.6874 0.4012 0.2133 372 6 2.8908 1.6499 0.5844 1.0443 0.6858 0.4204 0.2133 203 7 3.4037 1.9266 0.7074 1.1524 0.7445 0.4466 0.2133 115 8 3.8006 2.0444 0.7642 1.1847 0.8012 0.4802 0.2133 61 9 3.9575 2.1997 0.8096 1.1856 0.8057 0.4603 0.2133 19 10 4.3779 2.4897 0.9564 1.1785 0.6904 0.5947 0.2133 14 11 4.7845 2.6689 1.1181 1.3284 0.8735 0.7886 0.2133 8 12 4.8664 2.9615 1.2622 1.5562 1.0878 0.8452 0.2133 5 13 3.0738 1.9880 1.0199 1.0908 0.8781 0.7931 0.2133 2 15 4.4652 2.7549 1.3539 1.4413 1.0156 0.9548 0.2133 1 s ec ond s / s e n t e n ce 0 1 2 3 4 5 number of constraints, C (after BPE) 3 4 5 6 7 8 9 10 GBS/10 (BLEU 27.8, k=(C+1) 10) GBS/5 (BLEU 27.5, k=(C+1) 5) GBS/1 (BLEU 25.6, k=(C+1) 1) DBA (BLEU 27.2, k=20) DBA (BLEU 26.7, k=10) DBA (BLEU 23.5, k=5) unconstrained (k=10) Volta decoding beam size = 10 (DBA), 5(C+1) (GBS)-1 # constraints phr2 phr3 phr4 rand1 rand2 rand3 0 1 0.6303 2 0.6407 0.6862 0.6560 3 0.6534 0.6404 0.6865 0.6584 0.6817 4 0.6493 0.6799 0.6439 0.7503 0.7267 0.7240 5 0.6441 0.6165 0.6141 0.7177 0.6706 0.6874 6 0.7285 0.7219 0.6195 0.7096 0.6858 7 0.7426 0.6765 0.6883 0.7445 8 0.7091 0.6819 0.8012 9 0.8327 10 0.5532 11 12 13 14 15 16 17 threshold 30 times 0 1 0.6303 2 0.6407 0.6862 0.6560 3 0.6534 0.6404 0.6865 0.6584 0.6817 4 0.6493 0.6799 0.6439 0.7503 0.7267 0.7240 5 0.6441 0.6165 0.6141 0.7177 0.6706 0.6874 6 0.7285 0.7219 0.6195 0.8798 0.7096 0.6858 7 0.7426 0.6765 0.6883 0.8994 0.7814 0.7445 8 1.1061 0.7091 0.6819 1.2610 0.8012 9 1.4579 0.8107 0.8327 0.8215 0.8057 10 0.7694 0.5532 1.0103 0.6904 11 0.7960 0.8048 0.6271 1.4163 0.8735 12 0.6904 1.0627 1.0878 13 0.7613 1.2263 0.8781 14 0.9635 0.7614 1.0156 15 0.7341 16 17 counts 0 0 1 0 2149 2 1679 317 1765 3 517 1355 171 453 1398 4 301 572 1100 53 284 554 5 135 400 600 33 136 356 6 57 208 467 9 62 209 7 36 104 254 5 20 113 8 7 57 152 6 61 9 2 25 78 5 25 10 5 42 5 14 11 3 2 19 1 4 12 4 8 1 13 2 2 1 14 1 4 1 15 2 16 17 # constraints 25.6 27.5 27.8 23.5 26.7 27.2 \u0000 1 Figure 4: Running time (seconds / sentence, lower is better) as a function of the number of constraints, C (after applying BPE) on the rand3 dataset.",
"but the quality suffers.",
"This should not be too surprising; correctly placing independent constraints is at least as hard as finding their correct permutation, which is exponential in the number of independent constraints.",
"But it is remarkable that the only failure to beat the baseline in terms of BLEU is when the algorithm is tasked with placing four random constraints (before BPE) with a beam size of 5.",
"In contrast, DBA never has any trouble placing phrasal constraints (dashed lines).",
"It's possible that the BLEU gains result from a boost in n-gram counts due to the mere presence of the reference constraints in the output, as opposed to their correct placement.",
"This appears not to be the case.",
"Experience examining the outputs shows its uncanny ability to sensibly place constrained words and phrases.",
"Figure 6 contains some examples from translating a German sentence into English, manually identifying interesting phrases in the target, choosing paraphrases of those words, and then decoding with them as constraints.",
"Note that the word weak, which doesn't fit in the semantics of the reference, is placed haphazardly.",
"We also confirm this correct placement quantitatively by comparing the location of the first word of each constraint in",
"(a) the reference and",
"(b) the output of the constrained decoder, represented as a percentage of the respective sentence lengths (Fig-ure 7).",
"We would not expect these numbers to 1319 Volta decoding rand3 beam size = 10 (DBA), 5(C+1) (GBS) 5 10 20 30 phr4 31.34 35.68 36.29 36.45 phr3 29.03 31.33 32.00 32.04 rand4 20.70 26.91 28.43 28.76 phr2 26.72 27.66 28.13 28.06 rand3 23.51 26.73 27.23 27.64 rand2 24.64 25.64 26.13 26.22 rand1 24.38 24.60 24.71 24.70 unconstrained 22.33 22.33 22.15 21.86 BLEU 15 20 25 30 35 40 beam size 5 10 20 30 phr4 phr3 rand4 phr2 rand3 rand2 rand1 unconstrained BLEU 15 20 25 30 35 40 beam size 5 10 20 30 phr4 phr3 rand4 phr2 rand3 rand2 rand1 unconstrained \u0000 1 Figure 5: BLEU score as a function of beam size under DBA.",
"be perfectly matched, but the strong correlation is pretty apparent (Pearson's r = 0 . 82 ).",
"Together, Figures 6 and 7 provide confidence that DBA is intelligently placing the constraints.",
"The inference procedure in SOCKEYE maximizes the length-normalized version of the sentence's log probability.",
"While there is no explicit training towards the metric, BLEU, modeling in machine translation assumes that better model scores correlate with better BLEU scores.",
"However, a general repeated observation from the NMT literature is the disconnect between model score and BLEU score.",
"For example, work has shown that opening up the beam to let the decoder find better hypotheses results in lower BLEU score (Koehn and Knowles, 2017), even as the model score rises.",
"The phenomenon is not well understood, but it seems that NMT models have learned to travel a path straight towards their goal; as soon as they get off this path, they get lost, and can no longer function (Ott et al., 2018).",
"Another way to look at this problem is to ask what the neural model thinks of the references.",
"Scoring against complete references is easy with NMT (Sennrich, 2017), but lexically-constrained decoding allows us to investigate this in finer-grained detail by including just portions of the references.",
"We observe that forcing the decoder to include even a single word from the reference imposes a cost in model score that is inversely 0 3 5 10 20 30 none 24.4 24.5 24.5 24.4 24.5 24.4 rand1 25.2 25.1 25.2 25.6 25.5 25.3 rand2 26.0 25.3 25.6 26.1 26.7 26.4 rand3 26.5 24.7 24.9 25.7 26.9 27.2 rand4 26.2 23.7 23.9 24.6 26.0 26.9 phr4 35.1 33.5 33.5 34.0 35.0 35.9 Table 3: BLEU scores decoding with a beam size of 10.",
"correlated with BLEU score, and that this grows with the number of constraints that are added (Fig-ure 8).",
"The NMT system seems quite averse to the references, even in small pieces, and even while it improves the BLEU score.",
"At the same time, the hypotheses it finds in this reduced space are still good, and become better as the beam is en-largened (Figure 5).",
"This provides a complementary finding to that of Koehn and Knowles (2017): in that setting, higher model scores found by a larger beam produce lower BLEU scores; here, lower model scores are associated with significantly higher BLEU scores.",
"In the results reported above, we used a pruning threshold of 20, meaning that any hypothesis whose log probability is not within 20 of the best completed hypothesis is removed from the beam.",
"This pruning threshold is far greater than those explored in other papers; for example, Wu et al. (2016) use 3. However, we observed two things: first, without pruning, running time for constrained decoding is nearly doubled.",
"This increased runtime applies to both DBA and GBS in Figure 4. Second, low pruning thresholds are harmful to BLEU scores (Table 3).",
"It is only once the thresholds reach 20 that the algorithm is able to find better BLEU scores compared to the unpruned baseline (column 0).",
"Why is the algorithm so slow without pruning?",
"One might suspect that the outputs are longer, but mean output length with all constraint sets is roughly the same.",
"The reason turns out to be that the the decoder never quits before the maximum 1320 constraint score output source Einer soll ein hochrangiges Mitglied aus Berlin gewesen sein .",
"timestep, N .",
"SOCKEYE 's stopping criterium is to wait until all hypotheses on the beam are finished.",
"Without pruning, the decoder generates a finished hypotheses, but continues on until the maximum timestep N , populating the rest of the beam with low-cost garbage.",
"An example can be found in Figure 9.",
"This may be an example of the well-attested phenomenon where NMT systems become unhinged from the source sentence, switching into language model mode and generating high-probable output with no end.",
"But strangely, this doesn't seem to affect the best hypotheses, but only the rest of the beam.",
"\u0000 1",
"the decoder, having been forced into a place it doesn't like, does not know how to generate good competing hypotheses.",
"An alternative to pruning is early stopping , which is to stop when the first complete hypothesis is generated.",
"In our experiments, while this did fix the problem of increasing runtimes, the BLEU scores were lower.",
"By setting a large pruning threshold, we produced large speedups over GBS, and demonstrated a constant overhead in the number of constraints.",
"Compared to GBS, our DBA algorithm makes lexically constrained decoding possible, requiring less than half a second on average on a Volta GPU with a 4-layer RNN.",
"Hokamp and Liu (2017) was novel in that it allowed the specification of arbitrary target-side words as hard constraints, implemented entirely as a restructuring of beam search, and without reference to the source.",
"A related approach was that of Anderson et al. (2017), who extended beam search with a finite state machine whose states marked completed subsets of the set of constraints, at an exponential cost in the number of constraints.",
"Lexically-constrained decoding also generalizes prefix decoding (Knowles and Koehn, 2016; Wuebker et al., 2016), since the h s i symbol can easily be included as the first word of a constraint.",
"Our work here has not explored where to get lexical constraints, but considering that question naturally brings to mind attempts to improve NMT by using lexicons and phrase tables (Arthur et al., 2016; Tang et al., 2016).",
"Finally, another approach which shares the hard-decision made by lexically constrained decoding is the placeholder approach (Crego et al., 2016), wherein identifiable elements in the input are transformed to masks during preprocessing, and then replaced with their original source-language strings during postprocessing.",
"Neural machine translation removes many of the knobs from phrase-based MT that provided fine-grained control over system output.",
"Lexically-constrained decoding restores one of these tools, providing a powerful and interesting way to in-fluence NMT output.",
"It requires only the specification of the target-side constraints; without any source word or alignment information, it correctly places the constraints.",
"Although we have only tested it here with RNNs, the code works without modification with other architectures generate target-side words one-by-one, such as the Transformer (Vaswani et al., 2017).",
"This paper has introduced a fast and practical solution.",
"Building on previous approaches, constrained decoding with DBA does away with linear and exponential complexity (in the number of constraints), imposing only a constant overhead.",
"On a Volta GPU, lexically-constrained decoding with DBA is practical, requiring about 0.6 seconds per sentence on average even with 10+ constraints, well within the realm of feasibility even for applications with strict lattency requirements, like post-editing tasks.",
"We imagine that there are further optimizations in reach that could improve this even further."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"In the context of fake news, bias, and propaganda, we study two important but relatively under-explored problems: ( i ) trustworthiness estimation (on a 3-point scale) and ( ii ) political ideology detection (left/right bias on a 7-point scale) of entire news outlets, as opposed to evaluating individual articles.",
"In particular, we propose a multi-task ordinal regression framework that models the two problems jointly.",
"This is motivated by the observation that hyper-partisanship is often linked to low trustworthiness, e.g., appealing to emotions rather than sticking to the facts, while center media tend to be generally more impartial and trustworthy.",
"We further use several auxiliary tasks, modeling centrality, hyper-partisanship, as well as",
"left-vs.-right bias on a coarse-grained scale.",
"The evaluation results show sizable performance gains by the joint models over models that target the problems in isolation.",
"Recent years have seen the rise of social media, which has enabled people to virtually share information with a large number of users without regulation or quality control.",
"On the bright side, this has given an opportunity for anyone to become a content creator, and has also enabled a much faster information dissemination.",
"However, it has also opened the door for malicious users to spread disinformation and misinformation much faster, enabling them to easily reach audience at a scale that was never possible before.",
"In some cases, this involved building sophisticated profiles for individuals based on a combination of psychological characteristics, meta-data, demographics, and location, and then micro-targeting them with personalized fake news with the aim of achieving some political or financial gains (Lazer et al., 2018; Vosoughi et al., 2018).",
"A number of fact-checking initiatives have been launched so far, both manual and automatic, but the whole enterprise remains in a state of crisis: by the time a claim is finally fact-checked, it could have reached millions of users, and the harm caused could hardly be undone.",
"An arguably more promising direction is to focus on fact-checking entire news outlets, which can be done in advance.",
"Then, we could fact-check the news before they were even written: by checking how trustworthy the outlets that published them are.",
"Knowing the reliability of a medium is important not only when fact-checking a claim (Popat et al., 2017; Nguyen et al., 2018), but also when solving article-level tasks such as fake news and click-bait detection (Brill, 2001; Finberg et al., 2002; Hardalov et al., 2016; Karadzhov et al., 2017; De Sarkar et al., 2018; Pan et al., 2018; Perez-Rosas et al., 2018) Political ideology (or left/right bias) is a related characteristic, e.g., extreme left/right media tend to be propagandistic, while center media are more factual, and thus generally more trustworthy.",
"This connection can be clearly seen in Figure 1.",
"Despite the connection between factuality and bias, previous research has addressed them as independent tasks, even when the underlying dataset had annotations for both (Baly et al., 2018).",
"In contrast, here we solve them jointly.",
"Our contributions can be summarized as follows: We study an under-explored but arguably important problem: predicting the factuality of reporting of news media.",
"Moreover, unlike previous work, we do this jointly with the task of predicting political bias.",
"As factuality and bias are naturally defined on an ordinal scale (factuality: from low to high , and bias: from extreme-left to extreme-right ), we address them as ordinal regression.",
"Using multi-task ordinal regression is novel for these tasks, and it is also an under-explored direction in machine learning in general.",
"We design a variety of auxiliary subtasks from the bias labels: modeling centrality, hyper-partisanship, as well as",
"left-vs.-right bias on a coarse-grained scale.",
"Factuality of Reporting Previous work has modeled the factuality of reporting at the medium level by checking the general stance of the target medium with respect to known manually fact-checked claims, without access to gold labels about the overall medium-level factuality of reporting (Mukherjee and Weikum, 2015; Popat et al., 2016, 2017, 2018).",
"The trustworthiness of Web sources has also been studied from a Data Analytics perspective, e.g., Dong et al. (2015) proposed that a trustworthy source is one that contains very few false claims.",
"In social media, there has been research targeting the user, e.g., finding malicious users (Mihaylov and Nakov, 2016; Mihaylova et al., 2018; Mihaylov et al., 2018), sockpuppets (Maity et al., 2017), Internet water army (Chen et al., 2013), and seminar users (Darwish et al., 2017).",
"Unlike the above work, here we study source reliability as a task in its own right, using manual gold annotations specific for the task and assigned by independent fact-checking journalists.",
"Moreover, we address the problem as one of ordinal regression on a three-point scale, and we solve it jointly with political ideology prediction in a multi-task learning setup, using several auxiliary tasks.",
"Predicting Political Ideology In previous work, political ideology, also known as media bias, was used as a feature for fake news detection (Horne et al., 2018a).",
"It has also been the target of classification, e.g., Horne et al. (2018b) predicted whether an article is biased ( political or bias ) vs. unbiased.",
"Similarly, Potthast et al. (2018) classi-fied the bias in a target article as ( i ) left vs. right vs. mainstream, or as ( ii ) hyper-partisan vs. mainstream.",
"Left-vs-right bias classification at the article level was also explored by Kulkarni et al. (2018), who modeled both the textual and the URL contents of the target article.",
"There has been also work targeting bias at the phrase or the sentence level (Iyyer et al., 2014), focusing on political speeches (Sim et al., 2013) or legislative documents (Gerrish and Blei, 2011), or targeting users in Twitter (Preotiuc-Pietro et al., 2017).",
"Another line of related work focuses on propaganda, which can be seen as a form of extreme bias (Rashkin et al., 2017; Barron-Cedeno et al., 2019a,b).",
"See also a recent position paper (Pitoura et al., 2018) and an overview paper on bias on the Web (Baeza-Yates, 2018).",
"Unlike the above work, here we focus on predicting the political ideology of news media outlets.",
"In our previous work (Baly et al., 2018), we did target the political bias of entire news outlets, as opposed to working at the article level (we also modeled factuality of reporting, but as a separate task without trying multi-task learning).",
"In addition to the text of the articles published by the target news medium, we used features extracted from its corresponding Wikipedia page and Twitter pro-file, as well as analysis of its URL structure and traffic information about it from Alexa rank.",
"In the present work, we use a similar set of features, but we treat the problem as one of ordinal regression.",
"Moreover, we model the political ideology and the factuality of reporting jointly in a multitask learning setup, using several auxiliary tasks.",
"Multitask Ordinal Regression Ordinal regression is well-studied and is commonly used for text classification on an ordinal scale, e.g., for sentiment analysis on a 5-point scale (He et al., 2016; Rosenthal et al., 2017a).",
"However, multi-task ordinal regression remains an understudied problem.",
"Yu et al. (2006) proposed a Bayesian framework for collaborative ordinal regression, and demonstrated that modeling multiple ordinal regression tasks outperforms single-task models.",
"Walecki et al. (2016) were interested in jointly predicting facial action units and their intensity level.",
"They argued that, due to the high number of classes, modeling these tasks independently would be inefficient.",
"Thus, they proposed the copula ordinal regression model for multi-task learning and demonstrated that it can outperform various single-task setups.",
"We use this model in our experiments below.",
"Balikas et al. (2017) used multi-task ordinal regression for the task of fine-grained sentiment analysis.",
"In particular, they introduced an auxiliary coarse-grained task on a 3-point scale, and demonstrated that it can improve the results for sentiment analysis on the original 5-point scale.",
"Inspired by this, below we experiment with different granularity for political bias; however, we explore a larger space of possible auxiliary tasks.",
"Copula Ordinal Regression We use the Copula Ordinal Regression (COR) model, which was originally proposed by Walecki et al. (2016) to estimate the intensities of facial action units (AUs).",
"The model uses copula functions and conditional random fields (CRFs) to approximates the learning of the joint probability distribution function (PDF) of the facial AUs (random variables), using the bivariate joint distributions capturing dependencies between AU pairs.",
"It was motivated by the fact that ( i ) many facial AUs co-exist with different levels of intensity, ( ii ) some AUs co-occur more often than others, and ( iii ) some AUs depend on the intensity of other units.",
"We can draw an analogy between modeling facial AUs and modeling news media, where each medium expresses a particular bias (political ideology) and can also be associated with a particular level of factuality.",
"Therefore, bias and factuality can be analogous to the facial AUs in (Walecki et al., 2016), and represent two aspects of news reporting, each being modeled on a multi-point ordinal scale.",
"In particular, we model bias on a 7-point scale ( extreme-left , left , center-left , center , center-right , right , and extreme-right ), and factuality on a 3-point scale ( low , mixed , and high ).",
"In our case, we train the COR model to predict the joint PDF between political bias and factuality of reporting.",
"This could potentially work well given the inherent inter-dependency between the two tasks as we have seen on Figure 1.",
"Auxiliary Tasks We use a variety of auxiliary tasks, derived from the bias labels.",
"This includes converting the 7-point scale to ( i ) 5-point and 3-point scales, similarly to (Balikas et al., 2017), and to ( ii ) a 2-point scale in two ways to model extreme partisanship, and centrality.",
"Here is the list of the auxiliary tasks we use with precise defini-tion of the label mappings: Bias5-way: Predict bias on a 5-pt scale; 1: extreme-left , 2: left , 3: { center-left, center, center-right } , 4: right , and 5: extreme-right .",
"Bias3-way: Predict bias on a 3-pt scale; 1: { extreme-left, left } , 2: { center-left, center, center-right } , and 3: { right, extreme-right } .",
"Bias-extreme: Predict extreme vs. nonextreme partisanship on a 2-pt scale; 1: { extreme-left, extreme-right } , 2: { left, center-left, center, center-right, right } .",
"Bias-center: Predict center vs. non-center political ideology on a 2-pt scale, ignoring polarity: 1: { extreme-left, left, right, extreme-right } , 2: { center-left, center, center-right } .",
"Features We used the features from (Baly et al., 2018) 1 .",
"We gathered a sample of articles from the target medium, and we calculated features such as POS tags, linguistic cues, sentiment scores, complexity, morality, as well as embeddings.",
"We also used the Wikipedia page of the medium (if any) to generate document embedding.",
"Then, we collected metadata from the medium's Twitter account (if any), e.g., whether is is verified, number of followers, whether the URL in the Twitter page matches the one of the medium.",
"Finally, we added Web-based features that ( i ) model the orthographic structure of the medium's URL address, and ( ii ) analyze the Web-traffic information about the medium's website, as found in Alexa rank.",
"2 4 Experiments and Evaluation Data We used the MBFC dataset (Baly et al., 2018) that has 1,066 news media manually annotated for factuality (3-pt scale: high , mixed , low ) and political bias (7-pt scale: from extreme-left to extreme-right ).",
"This dataset was annotated by volunteers using a detailed methodology 3 that is designed to guarantee annotation objectivity.",
"1 https://github.com/ramybaly/ News-Media-Reliability 2 https://www.alexa.com/siteinfo 3 For details, see https://mediabiasfactcheck.",
"Furthermore, readers can provide their own feedback on existing annotations, and in case of a large discrepancy, annotation is adjusted after a thor-ough review.",
"Therefore, we believe the annotation quality is good enough to experiment with.",
"We noticed that 117 media had low factuality because they publish satire and pseudo-science , neither of which has a political perspective.",
"Since we are interested in modeling the relation between factuality and bias, we excluded those websites, thus ending up with 949 news media.",
"Some examples from this dataset are shown in Table 1 with both factuality and bias labels, in addition to their corresponding Twitter handles and Wikipedia pages.",
"Overall, 64% of the media in our dataset have Wikipedia pages, and 65% have Twitter accounts.",
"Table 2 further provides detailed statistics about the label distribution in the MBFC dataset.",
"Experimental Setup We used the implementation 4 of the Copula Ordinal Regression (COR) model as described in (Walecki et al., 2016).",
"In our experiments, we used 5-fold cross-validation, where for each fold we split the training dataset into a training part and a validation part, and we used the latter to fine-tune the model's hyper-parameters, optimizing for Mean Absolute Error (MAE).",
"MAE is an appropriate evaluation measure given the ordinal nature of the tasks.",
"These hyper-parameters include the copula function ( Gumbel vs. Frank ), the marginal distribution ( normal vs. sigmoid ), the number of training iterations, the optimizer ( gradient descent , BFGS ), and the connection density of the CRFs.",
"We report both MAE and MAEM , which is a variant of MAE that is more robust to class imbalance.",
"See (Bac-cianella et al., 2009; Rosenthal et al., 2017b) for more details about MAEM vs. MAE.",
"We compare the results to two baselines: ( i ) majority class, and ( ii ) single-task ordinal regression.",
"Results and Discussion Table 3 shows the evaluation results for the COR model when trained to jointly model the main task ( shown in the columns ) using combinations of auxiliary tasks ( shown in the rows ).",
"We can see that the single-task ordinal regression model performs much better than the majority class baseline based on both evaluation measures.",
"We can further see that the performance on the main task improves when jointly modeling several auxiliary tasks.",
"This improvement depends on the auxiliary tasks in use.",
"For factuality prediction, it turns out that the combination of bias-center + bias-extreme yields the best overall MAE of 0.481.",
"This makes sense and aligns well with the intuition that knowing whether a medium is centric or hyper-partisan is important to predict the factuality of its reporting.",
"For instance, a news medium without a political ideology tends to be more trustworthy compared to an extremely biased one, regardless of their polarity (left or right), as we should expect based on the data distribution shown in Figure 1 above.",
"For bias prediction (at a 7-point left-to-right scale), a joint model that uses political bias at different levels of granularity (5-point and 3-point) as auxiliary tasks yields the best overall MAE of 1.479.",
"This means that jointly modeling bias with the same information at coarser levels of granularity, i.e., adding 3-point and 5-point as auxiliary tasks, reduces the number of gross mistakes.",
"E.g., predicting extreme-left instead of extreme-right , since the model is encouraged by the auxiliary tasks to learn the correct polarity, regardless of its intensity.",
"We can see that factuality is not very useful as an auxiliary task by itself (MAE=1.584 and MAEM =1.695).",
"In other words, a medium with low factuality could be extremely biased to either the right or to the left.",
"Therefore, relying on factuality alone to predict bias might introduce severe errors, e.g., confusing extreme-left with extreme-right, thus leading to higher MAE scores.",
"This can be remedied by adding factuality to the mix of other auxiliary tasks to model the main task (7-point bias prediction).",
"The results of these experiments, shown in parentheses in Table 3, indicate that adding factuality to any combination of auxiliary tasks consistently yields lower MAE scores.",
"In particular, modeling the combination of factuality + bias5-way + bias3-way yields the best results (MAE=1.475 and MAEM =1.623).",
"This result indicates that factuality provides complementary information that can help predict bias.",
"We ran a two-tailed t-test for statistical significance, which is suitable for an evaluation measure such as MAE, to confirm the improvements that were introduced by the multi-task setup.",
"We found that the best models (shown in bold in Table 3) outperformed both the corresponding majority class baselines with a p-value 0.001, and the corresponding single-task ordinal regression baselines with a p-value 0.02.",
"Finally, we compared the above results to our previous work (Baly et al., 2018) by independently training a Support Vector Machine (SVM) classi-fier for each task, using the same features.",
"The resulting MAE was 0.450 for factuality and 1.184 for bias prediction, which is slightly better then our results (yet, very comparable for factual-ity).",
"However, our goal here is to emphasize the advantages of modeling the two tasks jointly.",
"We have presented a multi-task ordinal regression framework for jointly predicting trustworthiness and political ideology of news media sources, using several auxiliary tasks, e.g., based on a coarser-grained scales or modeling extreme partisanship.",
"Overall, we have observed sizable performance gains in terms of reduced MAE by the multi-task ordinal regression models over single-task models for each of the two individual tasks.",
"In future work, we want to try more auxiliary tasks, and to experiment with other languages.",
"We further plan to go beyond left vs. right , which is not universal and can exhibit regional specificity (Tavits and Letki, 2009), and to model other kinds of biases, e.g., eurosceptic vs. europhile , nationalist vs. globalist , islamist vs. secular , etc.",
"This research is part of the Tanbih project, 5 which aims to limit the effect of fake news, propaganda and media bias by making users aware of what they are reading.",
"The project is developed in collaboration between the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Computing Research Institute (QCRI), HBKU."
] | [
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"objective",
"method",
"objective",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"result",
"method",
"abstain",
"other",
"other"
] |
[
"Multi-task learning (MTL) and transfer learning (TL) are techniques to overcome the issue of data scarcity when training state-of-the-art neural networks.",
"However, finding beneficial auxiliary datasets for MTL or TL is a timeand resource-consuming trial-and-error approach.",
"We propose new methods to automatically assess the similarity of sequence tagging datasets to identify beneficial auxiliary data for MTL or TL setups.",
"Our methods can compute the similarity between any two sequence tagging datasets, i.e. they do not need to be annotated with the same tagset or multiple labels in parallel.",
"Additionally, our methods take tokens and their labels into account, which is more robust than only using either of them as an information source, as conducted in prior work.",
"We empirically show that our similarity measures correlate with the change in test score of neural networks that use the auxiliary dataset for MTL to increase the main task performance.",
"We provide an efficient, open-source implementation.",
"1 1 Introduction State-of-the-art neural networks usually require large amounts of training data and vast computational resources.",
"Especially for low-resource tasks, data scarcity is the main issue hampering the training of robust models.",
"By leveraging multitask learning or transfer learning, auxiliary data can be incorporated into the training to boost the main task performance.",
"Finding suitable auxiliary datasets for these cases is a timeand resource-consuming trial-and-error approach, because there can be plenty of plausible auxiliary datasets that could help to learn the main task.",
"For a proper evaluation of different auxiliary datasets, hyperparameter search and training runs with multiple random seeds have to be performed for each auxiliary 1 github.com/uhh-lt/seq-tag-sim dataset individually.",
"Thus, the process takes even longer and uses even more computational resources.",
"We propose methods to shorten this trial-and-error approach by computing the similarity between any two sequence tagging datasets.",
"Based on the similarity, suitable datasets can be quickly selected to be used as auxiliary training data for multi-task or transfer learning.",
"Our contributions are a family of novel methods to compute the similarity of sequence tagging datasets, where the similarity values correlate with the change in multi-task learning performance when using one dataset as auxiliary data for training the other.",
"We evaluate our methods in experiments with five part-of-speech (POS) tagging, nine named-entity recognition (NER) and three argumentation mining (AM) datasets.",
"Our similarity measures allow for comparison both datasets for the same and different tasks, not requiring the same set of labels on target and auxiliary dataset.",
"The calculated similarity scores can be used to predict which dataset will be beneficial as auxiliary training data for multi-task training in order to shorten the search process.",
"Multi-task learning (MTL) is a technique to learn multiple tasks jointly (Caruana, 1997).",
"Depending on the setting, either all tasks are equally important, or only the performance on the main task is of interest, which shall be improved with additional training data.",
"MTL has been successfully applied in natural language processing for various sequence tagging tasks (Sgaard and Goldberg, 2016; Bjerva et al., 2016; Plank et al., 2016; Martnez Alonso and Plank, 2017; Kaiser et al., 2017; Bingel and Sgaard, 2017; Augenstein and Sgaard, 2017; Kim et al., 2017; Yang et al., 2017; Changpinyo et al., 2018; Liu et al., 2018; Schulz et al., 2018).",
"These approaches use hard parameter sharing in the hidden layers of neural learning architectures, where the same weights are updated from several tasks.",
"The majority of works combined a main task with a single, supervised auxiliary task.",
"In transfer learning, a model is pre-trained on an auxiliary dataset to increase the main task performance.",
"Howard and Ruder (2018) showed knowledge transfer based on large-scale language modeling.",
"Before the breakthrough with BERT (Devlin et al., 2019), only partial knowledge transfer via word embeddings such as word2vec (Mikolov et al., 2013) or ELMo (Ilic et al., 2018) was utilized.",
"In theory, auxiliary tasks can have various relationships to the main task (Ruder, 2017).",
"In practice, the most common choice is to use a somehow related task.",
"Caruana (1997) argues that tasks are similar if the same features are used for making predictions.",
"Baxter (2000) suggests similar tasks should have the same inductive bias.",
"Ben-David and Schuller (2003) indicate that tasks originating from the same probability distribution are similar and perform well in an MTL setting.",
"No universal measure for task similarity exists, but it is needed to select tasks to prefer for training (Ruder, 2017).",
"Although MTL is frequently applied in recent work, few elaborate on the effect of task and dataset similarity.",
"Recent work on neural MTL found different hints regarding task similarity that are only applicable to a specific scenario.",
"Kim et al. (2017) performed MTL on POS tagging across 14 languages and found that language similarity seems to correlate with MTL performance.",
"Yang et al. (2017) worked on common tasks with artifi-cially reduced datasets.",
"They attribute the degree of performance increase to label abundance for the main task, dataset similarity and number of shared parameters.",
"Changpinyo et al. (2018) compared eleven tasks and observed that some tasks increase the performance in most cases, while tasks with a small tagset decreased the main task performance.",
"In contrast, Martnez Alonso and Plank (2017) show results that auxiliary tasks with few labels and a uniform label distribution perform better for MTL in neural sequence tagging: Auxiliary tasks having many labels or high entropy harm the main task performance.",
"While Ruder et al. (2019) confirm these findings, Bjerva (2017) found no evidence of label entropy correlating with MTL performance.",
"Martnez Alonso and Plank (2017) found a difference between two POS datasets when used as auxiliary data because converting one to another tagset changes the effect of MTL significantly.",
"Kim et al. (2015) propose a method using label embeddings to map labels from auxiliary datasets to the target tagset so that MTL can be treated as single-task learning (STL) with an increased amount of training data.",
"Bingel and Sgaard (2017) predict MTL performance from dataset and STL learning features and found the learning curve to be much more important.",
"From the dataset features, the number of labels on the main task and the auxiliary label entropy showed predictive potential.",
"Most similar to our approach is the work of Bjerva (2017), who estimates the effect of an auxiliary task in MTL with information-theoretic measures.",
"As the method requires the same datasets to be tagged with multiple tasks in parallel, at least one task must be automatically taggable with almost perfect results.",
"He shows a correlation of conditional entropy and mutual information with a change in accuracy compared to STL.",
"Results on the semantic task of Bjerva et al. (2016); Martnez Alonso and Plank (2017) indicate that mutual information for helpful auxiliary tasks is higher than for harmful tasks.",
"Augenstein et al. (2018) propose an architecture that learns label embeddings for natural language classification tasks and find that label embeddings indicate gains or harms of MTL.",
"Ruder et al. (2019) correlate task properties with performance differences and learned meta-network parameters of their proposed sluice networks.",
"They find that MTL gains are higher for smaller training datasets and that sluice networks learn to share more in case of higher variance in the training data.",
"As opposed to previous approaches, our methods can compare same-task datasets and are not restricted to datasets with parallel labels.",
"As our experiments in Section 5 require these properties, previous approaches are not applicable and thus not comparable.",
"Next, we will introduce information-theoretic measures that build the foundation for our dataset similarity measures proposed in Section 4.",
"Entropy is a measure of the uncertainty of a random variable.",
"The entropy $H(X)$ of a discrete random variable $X$ with alphabet $\mathcal{X}$ is defined as $H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x)$ (1), where $p(x)$ is the probability mass function $p(x) = \Pr\{X = x\}, x \in \mathcal{X}$.",
"It is 0 when $p = 0$ or $1$ and maximal when $p(x) = \frac{1}{|\mathcal{X}|}$ (uniform distribution), with an upper bound of $H(X) \leq \log_2 |\mathcal{X}|$.",
"Joint entropy H ( X, Y ) extends entropy from a single to two random variables.",
"For a pair of discrete random variables $(X, Y)$ with a joint probability distribution $p(x, y)$, it is defined as $H(X, Y) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log_2 p(x, y)$ (2).",
"Mutual information (MI) $I(X; Y)$ describes the amount of information one random variable $X$ contains about another, $Y$.",
"It is a symmetric measure with range $[0, \min\{H(X), H(Y)\}]$, defined as $I(X; Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log_2 \frac{p(x, y)}{p(x)\,p(y)}$ (3), with probability mass functions $p(x)$, $p(y)$ and a joint probability mass function $p(x, y)$.",
"For a detailed description of entropy, mutual information and information theory in general, please refer to Cover and Thomas (2006).",
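"As an illustration of these definitions, the following minimal Python sketch (ours, not from the paper) estimates entropy, joint entropy and mutual information from two aligned label sequences via relative frequencies; the toy sequences are invented for illustration.",
```python
import math
from collections import Counter

def entropy(labels):
    # H(X) = -sum_x p(x) * log2 p(x), with p estimated by relative frequency
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def joint_entropy(xs, ys):
    # H(X, Y): entropy of the paired variable (X, Y)
    return entropy(list(zip(xs, ys)))

def mutual_information(xs, ys):
    # I(X; Y) = H(X) + H(Y) - H(X, Y)
    return entropy(xs) + entropy(ys) - joint_entropy(xs, ys)

# Toy aligned label sequences (hypothetical, for illustration only)
x = ["NOUN", "VERB", "NOUN", "DET", "NOUN", "VERB"]
y = ["N", "V", "N", "O", "N", "V"]
print(entropy(x), joint_entropy(x, y), mutual_information(x, y))
```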
"A clustering $C$ is a way to partition a dataset $D$ into non-overlapping subsets $\{c_1, c_2, \ldots\}$ that together contain all $N$ items of $D$.",
"Comparing clusterings requires a measure to determine the quality of a clustering according to another clustering, e.g. the ground truth.",
"Such a measure should quantify the amount of information shared between both clusterings (Vinh et al., 2010).",
"Information-theoretic clustering comparison measures are based on a solid mathematical foundation from information theory and can work with non-linear similarities.",
"They have become popular through the works of Strehl and Ghosh (2003) and Meilă (2005).",
"Mutual information measures the information shared between two clusterings $C$ and $C'$.",
"A higher MI signals that $C'$ provides greater help in predicting the cluster labels in $C$.",
"Several normalized mutual information (NMI) variants can be derived: $\mathrm{NMI}_{joint} = \frac{I(C; C')}{H(C, C')}$ (4) and $\mathrm{NMI}_{max} = \frac{I(C; C')}{\max(H(C), H(C'))}$ (5).",
"Analogously to $\mathrm{NMI}_{max}$, there are $\mathrm{NMI}_{sum}$, $\mathrm{NMI}_{sqrt}$ and $\mathrm{NMI}_{min}$, which use the sum of the entropies, the square root of the entropy product, or the minimum of both entropy values as the normalization factor (Kvålseth, 1987; Strehl and Ghosh, 2003; Yao, 2003; Liu et al., 2008).",
"They are all bounded in [0 , 1] , equaling 0 when two clusterings share no information at all, i.e. are fully independent and 1 when two clusterings are identical.",
"According to Vinh et al. (2010), NMI max and NMI joint satisfy the highest number of theoretical properties desirable among the clustering comparison measures.",
"They prove that only the unit complements of both measures satisfy the metric property (positive definiteness, symmetry and triangle inequality).",
"While all measures satisfy the normalization property, none conform to the constant baseline property unless the number of items $N$ is large compared to the number of clusters.",
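"A short sketch of the two NMI variants from Equations 4 and 5, reusing the entropy helpers defined in the sketch above; the convention of returning 1.0 when both clusterings are trivial (zero joint entropy) is our own assumption, not taken from the paper.",
```python
def nmi_joint(xs, ys):
    # NMI_joint = I(C; C') / H(C, C')   (Equation 4)
    h_xy = joint_entropy(xs, ys)
    return mutual_information(xs, ys) / h_xy if h_xy > 0 else 1.0

def nmi_max(xs, ys):
    # NMI_max = I(C; C') / max(H(C), H(C'))   (Equation 5)
    denom = max(entropy(xs), entropy(ys))
    return mutual_information(xs, ys) / denom if denom > 0 else 1.0
```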
"The high-level idea of our dataset similarity measures is the following: Words and labels from one dataset are correlated with the words and their labels from another dataset to create a probabilistic mapping between both label sets.",
"Either an exact string matching or a fuzzy matching based on word embedding representations can be used.",
"The dataset similarity is measured via the quality of this label mapping.",
"Transforming the problem of token-label dataset similarity to a clustering comparison problem allows reusing existing clustering comparison measures.",
"A clustering represents one label set, and each label is a cluster within the clustering, i.e. all tokens having the same label belong to one cluster.",
"A contingency table, also called a confusion matrix, is a handy tool to compare clusterings.",
"Let us assume that a dataset $D$ is annotated with two labels in parallel from two tasks $T$ and $T'$ with arbitrary label sets $L$ and $L'$.",
"The comparison of $L$ with $L'$ on $D$ can be transformed into a clustering comparison problem.",
"The clusters for $T$ are the labels $l_1, l_2, \ldots, l_N$ when the label set $L$ has $N$ different labels in total.",
"The clusters for $T'$ are labeled analogously $l'_1, l'_2, \ldots, l'_M$ for the $M$ labels in the set $L'$.",
"Table 1 shows the resulting contingency table for the described setting.",
"The values $c_{xy}$ are the counts of how many tokens in the dataset are labeled as (i.e. belong to cluster) $l_x$ in task $T$ and simultaneously $l'_y$ in task $T'$ (illustrating examples are provided in Appendix A.1).",
"[Table 1: contingency table with column labels $l'_1, l'_2, \ldots, l'_M$, row labels $l_1, \ldots, l_N$, cell counts $c_{11}, c_{12}, \ldots, c_{NM}$, and row/column marginals $c_{i.}$ and $c_{.j}$.]",
"Based on the counts in the contingency table, information-theoretic measures such as (joint) entropy or mutual information can be calculated.",
"Because the probability mass functions $p(x)$, $p(y)$ and $p(x, y)$ are unknown for the label sets $L$ and $L'$ in dataset $D$, the probabilities are approximated by the relative frequencies of the label pairs.",
"The entropy of both label sets has to be taken into account to know whether the tasks $T$ and $T'$ are similar, i.e. one of the normalized mutual information variants shown in Equations 4 and 5 has to be used.",
"With the notation in Table 1, the $\mathrm{NMI}_{joint}$ definition becomes $\mathrm{NMI}(L, L')_{joint} = \frac{I(L; L')}{H(L, L')} = \frac{\sum_{i=1}^{N} \sum_{j=1}^{M} \frac{c_{ij}}{c} \log_2 \left( \frac{c_{ij}\, c}{c_{i.}\, c_{.j}} \right)}{-\sum_{i=1}^{N} \sum_{j=1}^{M} \frac{c_{ij}}{c} \log_2 \left( \frac{c_{ij}}{c} \right)}$ (6), where $c$ is the total count and $c_{i.}$, $c_{.j}$ are the row and column marginals.",
"The other measures can be changed analogously.",
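"Equation 6 can be computed directly from contingency counts; the following is a sketch under the assumption that the table is stored as a sparse mapping from label pairs to counts.",
```python
import math
from collections import Counter

def contingency(labels_t, labels_t2):
    # c_ij: how many tokens are labeled l_i in task T and l'_j in task T'
    return Counter(zip(labels_t, labels_t2))

def nmi_joint_from_counts(counts):
    # NMI_joint(L, L') = I(L; L') / H(L, L')   (Equation 6)
    c = sum(counts.values())
    row, col = Counter(), Counter()
    for (i, j), c_ij in counts.items():
        row[i] += c_ij
        col[j] += c_ij
    mi = sum((c_ij / c) * math.log2((c_ij * c) / (row[i] * col[j]))
             for (i, j), c_ij in counts.items())
    h_joint = -sum((c_ij / c) * math.log2(c_ij / c) for c_ij in counts.values())
    return mi / h_joint if h_joint > 0 else 1.0
```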
"Next, we show how to transform label similarity to clustering comparison without being restricted to datasets annotated in parallel with both label sets.",
"To compare two datasets, one of the datasets can be tagged automatically with the other task's labels as proposed by Bjerva (2017).",
"However, a comparison is only possible if at least one of the tasks can be tagged automatically with near-perfect accuracy.",
"While the necessary performance level has been reached for a few simple tasks, the state-of-the-art performance on most tasks seems insufficient for this purpose.",
"Further, two datasets of the same task, e.g. two NER datasets with the same tagset, cannot be meaningfully compared when tagged automatically.",
"We propose two approaches to lift these restrictions on the datasets and tasks.",
"If a manually defined one-to-one mapping from labels of one dataset to another one exists, datasets can be compared to each other using this label mapping function, because it produces a dataset with parallel label sets.",
"While mapping a fine-grained label set to a coarse label set is possible, it is unclear how to map a coarse label to finer sub-labels.",
"The text overlap approach implicitly generates a label mapping from the token-label pairs of both datasets.",
"This has the advantage of being independent of external knowledge and enabling a probabilistic mapping from coarse to fine-grained label sets specific to the datasets.",
"Tokens are aggregated so that a token is associated with the number of times it has been tagged with each label.",
"Only tokens occurring in both datasets can be used to fill in the counts of a contingency table.",
"By looking only at the intersection of tokens occurring in both datasets, a new virtual dataset is created, where each token is tagged with two labels.",
"For each token, the count at position $(l_i, l'_j)$ in the contingency table is increased by a combination of the number of times the current token was tagged with labels $l_i$ and $l'_j$.",
"With the additive method to fill a contingency table, label counts for words from both datasets are added, because they are viewed as multiple instances from one dataset (illustrating examples are provided in Appendix A.2).",
"An alternative to addition is to use multiplication to combine the counts for matching words.",
"The counts for each label combination are multiplied and added at the corresponding position in the contingency table.",
"An effect of this approach is that words that are frequent in both datasets contribute more to the counts.",
"There are more possible schemes on how to combine the raw counts from two datasets into a mutual contingency table.",
"Similarity measures such as NMI can be computed on any contingency table obtained from these methods.",
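"The following sketch shows one plausible implementation of the additive and multiplicative count combination schemes described above; the exact weighting used in the paper may differ.",
```python
from collections import Counter, defaultdict

def token_label_counts(tokens, labels):
    # For each token, count how often it received each label in one dataset
    counts = defaultdict(Counter)
    for tok, lab in zip(tokens, labels):
        counts[tok][lab] += 1
    return counts

def overlap_contingency(counts_a, counts_b, combine="multiplicative"):
    # Fill the contingency table from tokens occurring in both datasets
    table = Counter()
    for tok in counts_a.keys() & counts_b.keys():
        for la, ca in counts_a[tok].items():
            for lb, cb in counts_b[tok].items():
                weight = ca * cb if combine == "multiplicative" else ca + cb
                table[(la, lb)] += weight
    return table
```
"Any contingency table produced this way can be fed to the nmi_joint_from_counts sketch shown earlier.",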
"An advantage of the text overlap approach is that it is fast because it only involves text processing and a few counts.",
"The downside is that an identical dataset can only be identified with 100% similarity if each word always has the same label.",
"Another issue is that only a fraction of each dataset is used for the actual comparison.",
"As the plain text overlap approach does not consider the ratio of shared vocabulary, it is possible to get a false positive, i.e. a high similarity is reported for two datasets although they share only one word.",
"To fix this, we combine the NMI value and the ratio of shared vocabulary (SV) via the harmonic mean into our text overlap (TO) measure $TO = \frac{2 \cdot \mathrm{NMI} \cdot \mathrm{SV}}{\mathrm{NMI} + \mathrm{SV}}$ (7), with the shared vocabulary $\mathrm{SV} = \frac{|V \cap V'|}{|V \cup V'|}$ (8), where $V$ and $V'$ are the sets of all unique words in the two datasets $D$ and $D'$.",
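"A direct transcription of Equations 7 and 8 as a sketch (the zero-division guard is our own convention):",
```python
def shared_vocabulary(vocab_a, vocab_b):
    # SV = |V intersection V'| / |V union V'|   (Equation 8)
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

def text_overlap(nmi, sv):
    # TO = harmonic mean of NMI and SV   (Equation 7)
    return (2 * nmi * sv / (nmi + sv)) if (nmi + sv) > 0 else 0.0
```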
"When constructing the contingency table (e.g. Table 1) with the text overlap approach, the sequence information of label-word pairs, i.e. the context, cannot be captured in the counts.",
"With the use of contextual embeddings, this issue can be mitigated sufficiently.",
"Word embeddings allow representing words in the form of dense vectors within a vector space instead of a specific character sequence in the language's vocabulary.",
"Thus, it is possible to perform mathematical operations on these vectors and compute e.g. the semantic similarity of two words by computing their cosine similarity within the vector space (Elekes et al., 2017).",
"These word vector techniques can be used to tackle the problems of the previously shown text overlap approach.",
"A first extension allows incorporating words not occurring in both datasets.",
"Vector representations are obtained for each unique word in the datasets.",
"Instead of ignoring words contained only in one dataset, the closest word from the other dataset is chosen via cosine similarity for the pairwise label comparison.",
"The remaining process and similarity measure computation stays the same.",
"In the vector space approach, all tokens are compared.",
"For each token, a unique vector representation is obtained via contextual embeddings such as ELMo (Ilic et al., 2018) or BERT (Devlin et al., 2019).",
"In order to fill in the counts of a contingency table, each token from one dataset is matched with the most similar vector representation in the other dataset, and the count for the label pair is increased by the vector space similarity of the two tokens (illustrating examples are provided in Appendix A.3).",
"The use of contextual embeddings allows incorporating the sequence information of label-word pairs into the counts.",
"A similarity measure like NMI can be calculated from these counts as before.",
"Identical datasets can be scored with 100% similarity when the contextual embeddings are able to produce unique vector representations for each token.",
"In general, this method handles ambiguity in language much better than the plain text approach, which should help to improve the similarity comparison between various datasets.",
"Because the process of selecting the closest vector representation from the main dataset to the auxiliary dataset or vice versa can result in different combinations, the counts in the contingency table will be different depending on the direction.",
"Thus, for a symmetric similarity measure like NMI, two scores are obtained.",
"We further combine the forward and backward directions using the harmonic mean into a unified undirectional embedding (UUE) measure: $\mathrm{UUE} = \frac{2 \cdot \mathrm{NMI}_{forward} \cdot \mathrm{NMI}_{backward}}{\mathrm{NMI}_{forward} + \mathrm{NMI}_{backward}}$ (9).",
"The forward and backward NMI in Equation 9 use the same NMI formula, applied to the different counts obtained from the two directions of embedding comparison.",
"In our experiments, the actual NMI formula is either NMI max or NMI joint due to their desirable theoretical properties.",
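"A minimal sketch of the directional matching and the UUE combination (Equation 9), assuming token vectors are given as NumPy matrices; the actual embedding extraction (e.g. with BERT) is out of scope here.",
```python
import numpy as np

def directional_counts(vecs_a, labels_a, vecs_b, labels_b):
    # Match every token in A to its most cosine-similar token in B and
    # add that similarity to the count of the resulting label pair
    a = vecs_a / np.linalg.norm(vecs_a, axis=1, keepdims=True)
    b = vecs_b / np.linalg.norm(vecs_b, axis=1, keepdims=True)
    sims = a @ b.T                      # cosine similarity matrix (|A| x |B|)
    best = sims.argmax(axis=1)
    table = {}
    for i, j in enumerate(best):
        key = (labels_a[i], labels_b[j])
        table[key] = table.get(key, 0.0) + float(sims[i, j])
    return table

def uue(vecs_a, labels_a, vecs_b, labels_b, nmi_fn):
    # UUE = harmonic mean of forward and backward NMI   (Equation 9)
    fwd = nmi_fn(directional_counts(vecs_a, labels_a, vecs_b, labels_b))
    bwd = nmi_fn(directional_counts(vecs_b, labels_b, vecs_a, labels_a))
    return (2 * fwd * bwd / (fwd + bwd)) if (fwd + bwd) > 0 else 0.0
```
"The weighted counts can be plugged into the same NMI computation as before, since the formula only requires non-negative cell values.",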
"In this section, experiments will be performed to check whether the similarity of two datasets correlates with the effect on the MTL performance when using the second dataset as auxiliary training data.",
"Before the similarity measures are evaluated together with the MTL performance, we evaluate them independently in a controlled environment.",
"We perform a sanity check by comparing the similarity scores with the intuitive, expected outcome.",
"Two POS tagging datasets (WSJ, EWT) and two NER datasets (CNLE, ONT) shown in Table 2 will be used to sample three new, non-overlapping datasets each.",
"The samples are named e.g. WSJ-1, WSJ-2, and WSJ-3.",
"Their sizes are equal to 1/6, 2/6 and 3/6 of the original number of tokens.",
"Under the assumption that the similarity within samples from the same original dataset is higher than the similarity between samples from different datasets, the pairwise NMI scores can be qualitatively evaluated.",
"Figure 1 shows the pairwise NMI_joint similarity scores obtained with Equation 6 between these twelve samples.",
"The pairs of identical datasets create a visible diagonal line of maximal similarity.",
"The visible 3×3 blocks along the diagonal show high similarity scores and are aligned with comparisons of samples within the same original dataset.",
"Per row or column, the values within these blocks are higher than any other value outside.",
"Thus, the NMI_joint score allows identifying other samples of the same original datasets.",
"Another interesting property is that the similarity between samples of the two original POS tagging datasets (WSJ, EWT) is higher than the similarity between any POS-NER pair.",
"The same is true the other way around for the NER dataset samples (CNLE, ONT).",
"Hence, the NMI_joint score can be used to distinguish datasets of the same task from others.",
"Note that all four original datasets use different tagsets with a greatly varying number of tags (see Table 2), and that neither the shared vocabulary nor the joint label entropy can be employed to distinguish the POS and NER samples correctly (see Figures 3 and 4 in Appendix A.4 for details).",
"Overall, the NMI_joint scores presented in Figure 1 agree with the intuition about which dataset samples should be similar.",
"For each row or column, the similarity values can be ordered descending by identical, same original dataset, same task, and other samples.",
"Experiments to correlate dataset similarity and the network's multi-task learning performance will be performed",
"a) using two neural network architectures with Softmax and conditional random field classifiers,",
"b) for the tasks of POS tagging, NER, and AM,",
"c) on multiple datasets per task.",
"Table 2 shows the datasets used in the experiments.",
"Similar to Yang et al. (2017), we sample new training datasets as subsets of the originals to show a larger influence of auxiliary data, as there is no room for improvement for simple tasks on large training sets.",
"For the auxiliary datasets, subsets of different sizes are sampled to allow a fair comparison of the performance effect.",
"The standard development and test sets of the original datasets are used if available.",
"Otherwise, random samples without overlap with any other subsampled dataset are used.",
"From the POS tagging datasets, a new training dataset of 25 000 tokens is sampled for WSJ, BC, and EWT.",
"From all POS tagging datasets, auxiliary datasets of increasing size are sampled, containing 25, 50, 100, 250, 500 and 1000 thousand tokens (i.e. 25 000 to 1 000 000), limited by the size of the original dataset.",
"For NER, training sets of 50 000 tokens are sampled from all datasets except GMB, SEC, and WNUT.",
"Auxiliary datasets containing 50, 100, 250, ..., 1000 thousand tokens are created for all datasets whenever possible.",
"For AM, we use the full PE and WD datasets for training and as auxiliary data.",
"We sample auxiliary data from the IBM data equal in size to the others.",
"As the primary concern of the experiments is to enable significant differences in the neural network results with different auxiliary datasets, the network shares most of its parameters.",
"In order to allow every training and auxiliary dataset combination to use their full potential, all relevant hyperparameters are tested for each pair of training and auxiliary dataset similar to Schulz et al. (2018).",
"The neural network architecture for the experiments uses hard parameter sharing with a bidirectional gated recurrent unit (GRU) (Cho et al., 2014), a simpler version of the long short-term memory (Hochreiter and Schmidhuber, 1997), that is commonly used in MTL sequence tagging works (see Section 2.1).",
"[Table 2: overview of the datasets (ID, name, reference, number of tokens, number of tags, STL performance); the POS tagging datasets include BNC, the British National Corpus (BNC Consortium, 2007; 111 973 625 tokens, 91 tags), and WSJ, the Penn Treebank Wall Street Journal (Marcus et al., 1999; 1 286 980 tokens, 45 tags, STL performance 86, truncated in extraction).]",
"Apart from self-learned word embeddings, character features based on another bidirectional GRU are included.",
"Similar to Plank et al. (2016), Martínez Alonso and Plank (2017), Bjerva (2017) and Ruder et al. (2019), we decided against pre-trained word embeddings in the network to avoid any influence on the comparison of STL and MTL performance.",
"The last two, task-specific layers transform the GRU's hidden state to the task-specific labels and apply either a Softmax or a conditional random field (CRF) (Lafferty et al., 2001) to predict the label (training procedure and hyperparameters are described in more detail in Appendix A.5).",
"Auxiliary data is only used for the same task, i.e. no POS tagging dataset is used as auxiliary training data for NER and vice versa.",
"For POS tagging, 81 pairs of training and auxiliary datasets are tested with 64 hyperparameter combinations and three random seeds.",
"In the case of NER, 117 pairs of training and auxiliary datasets are tested with two neural network models, 16 hyperparameter combinations, and three random seeds.",
"In total, 26 784 training runs have been performed.",
"We compute the similarities for pairs of training and auxiliary datasets in three ways.",
"The text overlap approach is used with and without word embeddings.",
"For the latter, 300-dimensional fastText embeddings with sub-word information are used (crawl-300d-2M-subword.zip from fasttext.cc), consisting of 2 million word vectors trained on the Common Crawl (Mikolov et al., 2018).",
"We evaluate the additive and multiplicative ways with multiple weighting schemes to combine the label counts and calculate various similarity measures from the resulting contingency table.",
"The BERT-Base Multilingual Cased model (Devlin et al., 2019) is used for the third, token-based approach.",
"In Figure 2, the difference in accuracy over STL is plotted against the UUE NMI_joint similarity measure using BERT embeddings.",
"Overall, the data points are scattered from the bottom left to the top right.",
"There are no cases of low similarity coinciding with high accuracy increase.",
"The data points with auxiliary data from the German GSD dataset are clustered close to the bottom left, i.e. low similarity and almost no accuracy gain.",
"This concurs with the intuition that using a German auxiliary dataset for an English training dataset should not lead to a significant performance increase.",
"The data points with auxiliary data from the same original dataset as the training set are clustered to the top right, i.e. have the highest similarity and performance increase as expected.",
"The scatter plots for other sizes of auxiliary data and methods, e.g. computing NMI_max on the contingency table from the text overlap approach, look similar.",
"To quantify the various similarity computation methods, we correlate the change in accuracy with the similarity value.",
"Table 3 shows the median and mean correlation of similarity with change in accuracy for the best ten methods averaged over groups of identically-sized auxiliary datasets.",
"As a baseline, the correlation with the ratio of shared vocabulary is included.",
"We only show the results for NMI_joint, as its correlation was equal to or better than that of NMI_max in most cases.",
"The correlation between the similarity and change in accuracy is strong according to both Kendall's rank correlation and Pearson's linear correlation coefficients, which is in line with the plot shown in Figure 2.",
"Since the p-values for the similarity methods are well below 0.005, it is very unlikely that similarity and accuracy are not correlated.",
"The strongest correlation according to Kendall's τ is achieved with the harmonic mean of shared vocabulary and multiplicative text overlap.",
"According to Pearson's r, the highest linear correlation is achieved with the UUE (Equation 9) vector space method, which is depicted in Figure 2.",
"The correlation coefficients of the text overlap approach are consistently higher than the shared vocabulary baseline since the baseline is oblivious to the labels.",
"For NER, the results are shown in Table 4.",
"In comparison to the POS tagging results, methods using embeddings perform better than those without.",
"The strongest Kendall and Pearson correlations are achieved by the vector space approach computing the joint NMI on a contingency table filled from forward BERT embeddings.",
"While a linear correlation on the POS tagging results was deemed reasonable based on a data analysis, the Pearson correlation values for NER might be prone to outlier effects and are therefore only included for completeness.",
"For AM, no quantitative analysis could be performed due to a limited number of samples.",
"With MTL, the performance on PE increased to 54.26 when using WD as auxiliary data, while IBM reduced it to 51.37.",
"WD performance is slightly reduced to 21.72 by PE as auxiliary data, but reduced to 9.42 by IBM.",
"While we saw no correlation with the text overlap similarities, the forward vector space measure matches the MTL score change when comparing averaged span embeddings: the NMI_joint similarity of PE-IBM is 0.09 and PE-WD is measured at 0.26, whereas WD-PE has a similarity score of 0.06 and WD-IBM is scored 0.04.",
"[Table 3 residue: column headers are primary method, combination, count method, embedding, Kendall's τ and Pearson's r; the first row lists the text overlap & SV method (TO, multiplicative) with a Kendall's τ of -0.73 (Pearson's value truncated in extraction).]",
"Thus, our similarity measure identifies the most promising auxiliary dataset also in this case.",
"Overall, there is a strong correlation between MTL scores and dataset similarity computed by our proposed methods.",
"In the case of POS tagging, the correlation is impressive: it is visible in the scatter plot and accompanied by high-confidence correlation coefficients.",
"The results for NER are less clear but still indicate that similarity and test set performance are correlated.",
"We can recommend the text overlap approach combined with the shared vocabulary for syntactic tasks with single-token labels.",
"It performed the best in our POS tagging evaluation and is computed in less than a second.",
"Both additive and multiplicative count combination methods worked equally well in our tests.",
"For more complex tasks such as NER or AM and in case labels span multiple tokens, we suggest using the approach based on the forward vector space similarity.",
"It performed the best in our NER evaluation.",
"Further, it was the only method to work reasonably well with the AM datasets because spans of multiple tokens could be compared by combining the embeddings of all contained tokens.",
"In all cases, we recommend using the mutual information normalized by the joint entropy (NMI_joint) as the actual similarity measure, because it was either equal to or better than the other variants.",
"The similarity measures allow distinguishing good from bad candidates for usage as auxiliary data.",
"This information is immensely valuable, as the number of expensive neural network training runs can be reduced to a fraction while still finding the best auxiliary dataset(s) to increase performance on the main task.",
"In contrast to previous methods, our measures do not require the label sets to be the same and do not require automatic tagging.",
"The experiments show that similarity measures allow ordering the effects of auxiliary datasets by direction and intensity for an individual training dataset.",
"Our experimental findings are also supported from a theoretical point of view.",
"The developed methods working on both words and their labels have a substantial advantage over approaches that are based only on words or the label distributions.",
"The quick similarity calculation can improve the main task performance by identifying better auxiliary datasets that would never have made it through the otherwise purely manual preselection process.",
"In future work, apart from improving the similarity measures, one could examine predicting MTL scores or estimating the right amount of auxiliary data or shared parameters in the neural network.",
"We would like to thank all anonymous reviewers for their valuable feedback.",
"This work was partially funded by the Cluster of Excellence CLICCS (EXC 2037), Universität Hamburg, funded through the German Research Foundation (DFG).",
] | [
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Five years after the first published proofs of concept, direct approaches to speech translation (ST) are now competing with traditional cascade solutions.",
"In light of this steady progress, can we claim that the performance gap between the two is closed?",
"Starting from this question, we present a systematic comparison between state-of-the-art systems representative of the two paradigms.",
"Focusing on three language directions (English→German/Italian/Spanish), we conduct automatic and manual evaluations, exploiting high-quality professional post-edits and annotations.",
"Our multi-faceted analysis on one of the few publicly available ST benchmarks attests for the first time that:",
"i) the gap between the two paradigms is now closed, and",
"ii) the subtle differences observed in their behavior are not sufficient for humans either to distinguish them or to prefer one over the other.",
"Speech translation (ST) is the task of automatically translating a speech signal in a given language into a text in another language.",
"Research on ST dates back to the late eighties and its evolution followed the development of the closely related fields of speech recognition (ASR) and machine translation (MT) that, since the very beginning, provided the main pillars for building the so-called cascade architectures.",
"With the advent of deep learning, the neural networks widely used in ASR and MT have been adapted to develop a new direct ST paradigm.",
"This approach aims to overcome known limitations of the cascade one (e.g. architectural complexity, error propagation) with a single encoder-decoder architecture that directly translates the source signal bypassing intermediate representations.",
"Until now, the consolidated underlying technologies and the richness of available data have upheld the supremacy of cascade solutions in industrial applications.",
"However, architectural simplicity, reduced information loss and error propagation are the ace up the sleeve of the direct approach, which has rapidly gained popularity within the research community in spite of the critical bottleneck represented by data paucity.",
"Within a few years after the first proofs of concept (Berard et al., 2016; Weiss et al., 2017), the performance gap between the two paradigms has gradually decreased.",
"This trend is mirrored by the findings of the International Workshop on Spoken Language Translation (IWSLT; http://iwslt.org), a yearly evaluation campaign where direct systems made their first appearance in 2018.",
"On English-German, for instance, the BLEU difference between the best cascade and direct models dropped from 7.4 points in 2018 (Niehues et al., 2018) to 1.6 points in 2019 (Niehues et al., 2019b).",
"In 2020, participants were allowed to choose between processing a pre-segmented version of the test set or the one produced by their own segmentation algorithm.",
"As reported in (Ansari et al., 2020), the distance between the two paradigms further decreased to 1.0 BLEU point in the first condition and, for the first time, it was slightly in favor of the best direct model in the second condition, with a small but nonetheless meaningful 0.24 difference.",
"So, quoting Ansari et al. (2020), is the cascade solution still the dominant technology in ST?",
"Has the direct approach closed the huge initial performance gap?",
"Are there systematic differences in the outputs of the two technologies?",
"Are they distinguishable?",
"Answering these questions is more than running an evaluation exercise.",
"It implies pushing research towards a deeper investigation of direct ST, finding a path towards its wider adoption in industrial settings, and motivating higher engagement in data exploitation and resource creation to train the data-hungry end-to-end neural systems.",
"For all these reasons, while Ansari et al. (2020) were cautious in drawing firm conclusions, in this paper we delve deeper into the problem with the first thorough comparison between the two paradigms.",
"Working on three language directions (en-de/es/it), we train state-of-the-art cascade and direct models (§3), running them on test data drawn from the MuST-C corpus (Cattoni et al., 2020).",
"Systems' behavior is analysed from different perspectives, by exploiting high-quality post-edits and annotations by professionals.",
"After discussing overall systems' performance (§4), we move to more fine-grained automatic and manual analyses covering two main aspects: the relation between systems' performance and specific characteristics of the input audio (§5), and the possible differences in terms of lexical, morphological and word ordering errors (§6).",
"We finally explore whether, due to latent characteristics overlooked by all previous investigations, the output of cascade and direct systems can be distinguished either by a human or by an automatic classifier (§7).",
"Together with a comparative study attesting the parity of the two paradigms on our test data, another contribution of this paper is the release of the manual post-edits that rendered our investigation possible.",
"The data is available at: https://ict.fbk.eu/mustc-post-edits .",
"Cascade ST. By concatenating ASR and MT components (Stentiford and Steer, 1988; Waibel et al., 1991), cascade ST architectures represent an intuitive solution to achieve reasonable performance and high adaptability across languages and domains.",
"At the same time, however, they suffer from well-known problems related to the concatenation of multiple systems.",
"First, they require ad-hoc training and maintenance procedures for the ASR and MT modules; second, they suffer from error propagation and from the loss of speech information (e.g. prosody) that might be useful to improve final translations.",
"Research has focused on mitigating error propagation by:",
"i) feeding the MT system with ASR data structures (e.g. ASR n-best, lattices or confusion networks) which are more informative than the 1-best output (Lavie et al., 1996; Matusov et al., 2005; Bertoldi and Federico, 2005; Beck et al., 2019; Sperber et al., 2019), and",
"ii) making the MT robust to ASR errors, for instance by training it on parallel data incorporating real or emulated ASR errors as in (Peitz et al., 2012; Ruiz et al., 2015; Sperber et al., 2017; Cheng et al., 2019; Di Gangi et al., 2019a).",
"Although the former solutions are effective to some extent, state-of-the-art cascade architectures (Pham et al., 2019; Bahar et al., 2020) prefer the latter, as they are simpler to implement and maintain.",
"Direct ST. To overcome the limitations of cascade models, Berard et al. (2016) and Weiss et al. (2017) proposed the first direct solutions bypassing intermediate representations by means of encoder-decoder architectures based on recurrent neural networks.",
"Currently, more effective solutions (Potapczyk and Przybysz, 2020; Bahar et al., 2020; Gaido et al., 2020) rely on ST-oriented adaptations of Transformer (Vaswani et al., 2017) integrating the encoder with:",
"i) convolutional layers to reduce input length, and",
"ii) penalties biasing attention to local context in the encoder self-attention layers (Povey et al., 2018; Sperber et al., 2018; Di Gangi et al., 2019b).",
"Though effective, these architectures have to contend with training data paucity, a critical bottleneck for neural solutions.",
"The problem has been mainly tackled with data augmentation and knowledge transfer techniques.",
"Data augmentation consists in producing artificial training corpora by altering existing datasets or by generating ( audio , translation ) pairs through speech synthesis or MT (Bahar et al., 2019b; Nguyen et al., 2020; Ko et al., 2015; Jia et al., 2019).",
"Knowledge transfer (Gutstein et al., 2008) consists in passing (here to ST) the knowledge learnt by a neural network trained on closely related tasks (here, ASR and MT).",
"Existing ASR models have been used for encoder pre-training (Berard et al., 2018; Bansal et al., 2019; Bahar et al., 2019a) and multi-task learning (Weiss et al., 2017; Anastasopoulos and Chiang, 2018; Indurthi et al., 2020).",
"Existing neural MT models have been used for decoder pre-training (Bahar et al., 2019a; Inaguma et al., 2020), joint learning (Indurthi et al., 2020; Liu et al., 2020) and knowledge distillation (Liu et al., 2019).",
"Previous comparisons.",
"Most of the works on direct ST also evaluate the proposed solutions against a cascade counterpart.",
"The conclusions, however, are discordant.",
"Looking at recent works, Pino et al. (2019) show similar scores, Indurthi et al. (2020) report higher results for their direct model, while Inaguma et al. (2020) end up with the opposite finding.",
"The main problems of these comparisons are that:",
"i) not all the architectures are equally optimized,",
"ii) for the sake of fairness in terms of training data, cascade systems are restricted to unrealistic settings with small training corpora that penalize their performance, and",
"iii) evaluation always relies only on automatic metrics computed on single references.",
"The IWSLT campaigns (Niehues et al., 2019a; Ansari et al., 2020) set up a shared evaluation framework where systems built on a large set of training data are optimized to achieve the best performance, independently from the underlying architecture.",
"In the last round, direct models approached, and in one case (Potapczyk and Przybysz, 2020) outperformed, the cascade ones.",
"However, the evaluation was run only on one language pair, by solely relying on automatic metrics and single references.",
"In this paper, we overcome these limitations by comparing the two paradigms on three language pairs, using different metrics, multiple references (including professional post-edits) as well as fine-grained automatic and manual analysis procedures.",
"To maximize the cross-language comparability of our analyses, we built the cascade and direct ST systems for en-de/es/it with the same core technology, based on Transformer.",
"Their good quality is attested by the comparison with the winning system at the IWSLT-20 offline ST task (Bahar et al., 2020) in the pre-segmented data condition (Ansari et al., 2020), which consists of an ensemble of two cascade models scoring 28.8 BLEU on the en-de portion of the MuST-C Common test set.",
"On the same data, our cascade and direct models achieve similar BLEU scores, respectively 28.9 and 29.1 (see Table 1).",
"On en-es and en-it, identical architectures perform similarly or better (up to 32.9 BLEU on en-es).",
"Although BLEU scores are not strictly comparable across languages, we can safely consider all our models as state-of-the-art.",
"Also the ASR performance of our cascade solution (10.2 WER on MuST-C Common) is in line with the results obtained by Bahar et al. (2020) for their best ASR model.",
"Data.",
"Our evaluation data is drawn from the TED-based MuST-C corpus (Cattoni et al., 2020), the largest freely available multilingual corpus for ST. It covers 14 language directions, with English audio segments automatically aligned with their corresponding manual transcriptions and translations.",
"The en-de/es/it MuST-C Common test sets contain the same 27 TED talks, for a total of around 2,500 segments largely overlapping across languages (segments can vary across languages due to the automatic procedures of segmentation, audio-text alignment and filtering that were applied to the talks).",
"For all three language pairs, we selected subsets of MuST-C Common containing the same English audio portions from each talk, in order to obtain representative groups of contiguous segments that are comparable across languages.",
"Furthermore, to ensure high data quality, we manually checked the selected samples and kept only those segments for which the audio-transcript-translation alignment was correct.",
"Each of the three resulting test sets henceforth PE-sets is composed of 550 segments, corresponding to about 10,000 English source words.",
"Post-editing.",
"A key element of our multi-faceted analysis is human post-editing (PE), which consists in manually correcting systems' output according to the input (the source audio in our case).",
"In PE-based evaluation, the original output is compared against its post-edited version using distance-based metrics like TER (Snover et al., 2006).",
"This allows for counting only the true errors made by a system, without penalising differences due to linguistic variation as it happens when exploiting independent references.",
"This makes PE-based evaluation one of the most prominent methodologies used for translation quality assessment (Snover et al., 2006, 2009; Denkowski and Lavie, 2010; Cettolo et al., 2013; Bojar et al., 2015; Graham et al., 2016; Bentivogli et al., 2018b).",
"To collect the post-edits for our study, we strictly followed the methodology of the IWSLT 2013-2017 evaluation campaigns (Cettolo et al., 2013), which offered us a consolidated framework and best practices to draw upon.",
"Our cascade and direct systems were both run on the PE-sets to be post-edited.",
"To guarantee high quality post-edits, for each language we hired two professional translators with experience in subtitling and post-editing.",
"Moreover, in order to cope with translators' variability (i.e. more/less aggressive editing strategies), the outputs of the two ST systems were randomly assigned, ensuring that each translator worked on all the 550 segments, post-editing an equal number of outputs from both systems.",
"The task was performed with a CAT tool (www.matecat.com) that displays the manual transcript of the audio together with the ST output to be edited.",
"However, since ST systems take an audio signal as input, we also provided translators with the audio file of each segment, asking them to post-edit strictly according to it (the ad-hoc ST PE guidelines given to translators are included in Appendix B).",
"For each language pair, the final PE-set used in our study consists of the 550 MuST-C original audio-transcript-translation triplets plus two additional sets of reference translations, i.e. the post-edited versions of the two systems' outputs.",
"Analyses.",
"The collected post-edits are exploited to assess overall systems' performance (§4) as well as to carry out deeper quantitative and qualitative analyses aimed to shed light on possible systematic differences in systems' behavior (§5.1 and §6.1).",
"Focusing on specific aspects of the ST problem, the inquiry is also performed by means of manual annotation of systems' outputs (§5.2, §6.2 and §7.1).",
"Due to the linguistic nature of this task, centred on fine-grained aspects requiring a variety of skills in both evaluation and ST technology, for such analyses we relied on three researchers in translation technology, one per language pair, with a strong background in linguistics, excellent knowledge of the addressed languages (C2 or native), as well as strong expertise in systems' evaluation.",
"We compute overall performance results both on the PE-sets and on the MuST-C Common test sets.",
"Our primary evaluation is based on the collected post-edits.",
"We consider two TER-based metrics (computed with tercom: www.cs.umd.edu/~snover/tercom):",
"i) human-targeted TER (HTER) computed between the automatic translation and its human post-edited version, and",
"ii) multi-reference TER (mTER) computed against the closest reference among the three available ones (two post-edits and the official reference from MuST-C).",
"The latter metric better accounts for post-editors' variability, making the evaluation more reliable and informative.",
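"One way to compute such PE-based scores, sketched here with sacrebleu's TER metric (our assumption; the official evaluations use tercom, and corpus-level aggregation details may differ):",
```python
from sacrebleu.metrics import TER

ter = TER()

def hter(hypothesis: str, post_edit: str) -> float:
    # HTER: TER between a system output and its own post-edited version
    return ter.sentence_score(hypothesis, [post_edit]).score

def mter(hypothesis: str, references: list) -> float:
    # mTER: TER against the closest of several references
    # (here: two post-edits plus the official MuST-C reference)
    return min(ter.sentence_score(hypothesis, [r]).score for r in references)
```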
"For the sake of completeness, in Table 1 we also report SacreBLEU (Post, 2018) and TER scores computed only on the official MuST-C Common references (SacreBLEU signature: BLEU+c.mixed+#.1+s.exp+tok.13a+v.1.4.3).",
"A bird's-eye view of the results shows that, in more than half of the cases, performance differences between cascade and direct systems are not statistically significant.",
"When they are, the raw count of wins for the two approaches is the same (4), attesting their substantial parity.",
"Looking at our primary metrics (HTER and mTER), systems are on par on en-it and en-de, while for en-es the direct approach significantly outperforms the cascade one.",
"This difference, however, does not emerge with the other metrics.",
"Indeed, BLEU and TER scores computed against the official references are less coherent across metrics and test sets.",
"For instance, on the en-it PE-set the cascade system significantly outperforms the direct one in terms of BLEU score, while TER shows the opposite on MuST-C Common .",
"Interestingly, the scores obtained using independent references can also disagree with those computed with post-edits.",
"This is the case of en-es, where significant HTER and mTER reductions attest the superiority of the direct system, while most BLEU and TER scores are still in favor of the cascade.",
"On the one hand, primary evaluation scores suggest that the rapidly advancing direct technology has eventually reached the traditional cascaded approach.",
"On the other, the highlighted incongruities confirm widespread concerns about the reliability of fully automatic metrics based on independent references to properly evaluate neural systems (Way, 2018).",
"This calls for deeper quantitative and qualitative analyses.",
"Those presented in the next sections investigate performance differences focusing on two main aspects: the impact of specific input audio properties ( 5), and the linguistic errors made by the systems ( 6).",
"The two ST approaches handle the input audio differently: the cascade one by means of a dedicated ASR component that produces intermediate transcripts; the direct one by extracting all the relevant information to translate in an end-to-end fashion.",
"Is it therefore possible that some audio properties have different impact on their results?",
"Overall performance being equal, answering this question would help to understand if one approach is preferable over the other under specific audio conditions.",
"Among other possible factors (e.g. noise, recording conditions, overlapping speakers), we tried to shed light on this aspect by focusing on two common factors: audio duration and speech rate.",
"To this aim, we grouped the sentences in the PE-set according to the sentence-wise HTER percentage difference, i.e. the difference between the cascade and direct HTER scores divided by their average.",
"The threshold for considering performance differences as significant was set to 10%.",
"The resulting groups contain sentences where:",
"i) cascade is significantly better than direct,",
"ii) direct is significantly better than cascade,",
"iii) the difference between the two is not significant, and",
"iv) both systems have HTER=0.",
"For each group, we calculated the average audio duration and the corresponding speech rate in terms of phonemes per second.",
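"A small sketch of the grouping criterion described above (the 10% threshold and the four groups follow the text; the function itself is our illustration):",
```python
def hter_group(hter_cascade: float, hter_direct: float,
               threshold: float = 10.0) -> str:
    # Group sentences by the sentence-wise HTER percentage difference:
    # (cascade - direct) divided by their average, in percent
    if hter_cascade == 0.0 and hter_direct == 0.0:
        return "both HTER=0"
    mean = (hter_cascade + hter_direct) / 2.0
    diff_pct = 100.0 * (hter_cascade - hter_direct) / mean
    if diff_pct > threshold:
        return "direct significantly better"
    if diff_pct < -threshold:
        return "cascade significantly better"
    return "no significant difference"
```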
"Results are shown in Table 2, where for the sake of completeness also the length of the reference audio transcript is given, together with the average HTER of the systems.",
"As we can see, results are coherent across languages: audio duration and speech rate averages do not differ, neither when one system performs significantly better than the other, nor when the HTER differences are not significant.",
"We can hence conclude that, if audio duration and speech rate have any influence on systems' performance, our analysis does not highlight specific conditions that are more favorable to one approach than to the other.",
"Both are equally robust with respect to the audio properties here considered.",
"Handling the input audio differently, the two approaches have inherent strengths and weaknesses.",
"In particular, although suffering from the wellknown scarcity of sizeable training corpora, direct solutions come with the promise (Sperber and Paulik, 2020) of:",
"i) higher robustness to error propagation, and",
"ii) reduced loss of speech information (e.g. prosody).",
"Our next qualitative analysis tries to delve into these aspects by looking at audio understanding and prosody issues.",
"Audio understanding.",
"Errors due to wrong audio understanding are easy to identify for cascade systems since they are evident in the intermediate ASR transcripts but harder to spot for direct systems, whose internal representations are by far less accessible.",
"In this case, errors can still be identified in mistranslations corresponding to words which are phonetically similar to parts of the input audio, e.g. 'nice voice' mistranslated in German as 'nette Jungen' ('nice boys').",
"To spot such errors, our annotators carefully inspected the PE-set by comparing the audio, the reference transcripts and systems' output translations for both the cascade and direct models, as well as the ASR transcripts for the cascade one.",
"Some interesting examples of the identified errors are reported in Table 3.",
"As shown in Table 4, audio understanding errors are quite common for both systems in all language pairs.",
"However, both the number of errors and the number of sentences they affect is significantly lower for the direct one.",
"We observed that this is the case especially for more difficult sentences, such as sentences with poor audio quality and overlapping or disfluent speech.",
"Though far from being conclusive (we acknowledge that, due to the opacity of direct models, their error counts might be slightly underestimated), this analysis seems to confirm the theoretical advantages of direct ST. This finding advocates for more thorough future investigations on neural networks' interpretability, targeting its empirical verification on larger and diverse benchmarks.",
"Prosody.",
"Prosody is central to disambiguating utterances, as it reflects language elements which may not be encoded by grammar and vocabulary choices.",
"While prosody is directly encoded by the direct system, it is lost in the unpunctuated input received by the MT component of a cascade.",
"Besides few interrogative sentences, our annotators were able to isolate only a handful of utterances whose prosodic markers result in different interpretations by the two models.",
"Concerning interrogatives, both systems managed to translate them correctly in most cases (24 for cascade and 25 for direct out of 31).",
"This is not surprising given the syntactic structure of English questions, which is explicit and does not rely solely on prosody (e.g. compared to Italian).",
"In all other cases (examples in Table 5), the direct model's higher sensitivity to prosody seems to give it an edge on cascade in disambiguating and correctly rendering the utterance meaning.",
"Also this finding calls for future inquiries aimed to check the regularity of these differences on larger datasets.",
"For this analysis, we rely on the publicly available tool used by Bentivogli et al. (2018a) to analyse what linguistic phenomena are best modeled by MT systems.",
"[Table residue (source/cascade/direct example triplets, apparently from Table 5): src 'nation states governments doing the attacks' → C 'Regierungen der Nationalstaaten' (governments of nation states) vs. D 'Nationen, Regierungen' (nations, governments); src 'like the one we saw before, moving' → C 'como el que vimos antes de moverse' (like the one we saw before moving) vs. D 'como el que hemos visto antes, moviéndose' (like the one we saw before, moving); src 'Photos like this: construction going on' → C 'Foto come questa costruzione' (Photos like this construction) vs. D 'Foto come queste: costruzione' (Photos like these: construction).]",
"Table 6 reports the distribution of (L)exical, (M)orphological and (R)eordering errors for cascade (C) vs. direct (D): en-de L 2481/2560 (+3.2%), M 468/536 (+14.5%), R 398/476 (+19.6%), total 3347/3572 (+6.7%); en-es L 2674/2497 (-6.6%), M 535/494 (-7.7%), R 308/290 (-5.8%), total 3517/3281 (-6.7%); en-it L 2264/2264 (0.0%), M 433/470 (+8.6%), R 230/226 (-1.7%), total 2927/2960 (+1.1%).",
"The tool exploits manual post-edits and HTER-based computations to detect and classify translation errors according to three linguistic categories: lexicon, morphology and word order.",
"Table 6 presents their distribution.",
"As expected from the HTER scores in Table 1, results vary across language pairs.",
"On en-it, systems show pretty much the same number of errors, with a slight percentage gain (+1.1) in favor of the cascade.",
"For the other two pairs, differences are more marked and opposite, with an overall error reduction for the direct system on en-es (-6.7) and in favor of the cascade on en-de (+6.7).",
"Looking at the distribution of errors across categories, while for en-es the direct system is always better and the percentage reduction is homogeneously distributed, for en-de the better performance of the cascade is concentrated in the morphology and word order categories.",
"Since English and German are the most different languages in terms of morphology and word order, this result suggests that cascade systems still have an edge on the direct ones in their ability to handle morphology and word reordering.",
"This is further supported by en-it: the only difference, in favor of the cascade, is indeed observed in the morphology category.",
"Since lexical errors represent by far the most frequent category for both approaches in all language pairs, we complement the automatic analysis with a more fine-grained manual inspection, further distinguishing among lexical errors due to missing words, extra words, or wrong lexical choice (various error taxonomies covering different levels of granularity have been developed, and the distinction between these types of lexical errors is widely adopted, including in the DQF-MQM framework: https://info.taus.net/dqf-mqm-error-typology-templ).",
"The analysis was carried out on subsets of the PE-set, created in such a way as to be suitable for manual annotation.",
"Namely, we removed sentences for which the output of the two systems is:",
"i) identical,",
"ii) judged correct by post-editors (HTER=0), or",
"iii) too poor to be reliably annotated for errors (HTER > 40%).",
"The resulting sets contain 207 sentences for en-de, 238 for en-es, and 285 for en-it.",
"This analysis reveals that, for all language pairs, wrong lexical choice is the most frequent error type (about 65% of lexical errors on average), followed by missing words (about 30%) and extra words (about 5%).",
"While errors due to lexical choice and superfluous words vary across languages, we observe a systematic behavior with respect to missing words (words that are present in the audio but are not translated).",
"As we can see in Table 7, direct systems lose more information from the source input than their cascade counterparts, in terms of both single words and contiguous word sequences.",
"It is particularly interesting to notice that also for en-es, where the direct system is significantly stronger than the cascade, the issue is still evident, although to a lesser extent.",
"Table 8 collects examples of the encountered lexical phenomena.",
"Finally, we report that a non-negligible amount of missing words (between 10% and 20%) is represented by discourse markers, i.e. words or phrases used to connect and manage what is being said (e.g. you know, well, now).",
"Although this is 11 Various error taxonomies covering different levels of granularity have been developed, and the distinction between these types of lexical errors is widely adopted, including the DQF-MQM framework https://info.taus.net/ dqf-mqm-error-typology-templ AUDIO That's fine, says George, C Das ist in Ordnung. [ ] George, D Das ist in Ordnung, [ ] George, AUDIO Well after two years, ...",
"a frequent phenomenon in speech, not translating discourse markers cannot be properly considered as an error, since markers",
"i) do not carry semantic information, and",
"ii) can be intentionally dropped in some use cases, such as in subtitling.",
"So far, our inquiry has been entirely driven by pre-defined assumptions (the importance of certain audio properties) and linguistic criteria (the focus on specific error types).",
"This top-down approach, however, might fail to disclose important differences, which were not specifically sought after when analysing the two paradigms.",
"This consideration motivates the adoption of the complementary bottom-up approach that concludes our comparative study by answering the question: is the output of cascade and direct systems distinguishable?",
"Understanding if and why discriminating between the two is possible would not only suggest new issues to look at.",
"It would also highlight possible output regularities that, despite the similar overall performance, make one paradigm preferable over the other in specific application scenarios.",
"To this aim, we set up a classification experiment, comparing the ability of humans to correctly identify the output of the two systems with the performance of an automatic text classifier.",
"After getting acquainted with systems' output through the previous manual analyses, our assessors were instructed to perform a classification task.",
"The classification had to be performed on 10 blocks of items comprising a set of unseen English contiguous sentences (gold transcripts) from the MuST-C Common test set, and two sets of anonymized translations, one produced by the cascade and one by the direct model.",
"For each block, the assessors had to assign each set of translations to the correct system, or label them as indistinguishable.",
"To investigate whether more context helps in the assignment, we set up two experiments with respectively 10 and 20 contiguous sentences per block.",
"The results in Table 9 show that en-es and en-it systems are not distinguishable, since only a maximum of 4 blocks out of 10 were correctly classified, while most en-de blocks were correctly classified.",
"According to the en-de assessor, this is due to the fact that the structure of the sentences generated by the direct system is very similar to that of the corresponding English sources.",
"This characteristic stands out in German, which differs from English in terms of word order more than Italian and Spanish.",
"This type of behavior does not necessarily imply the presence of errors but, like a fingerprint, makes the en-de direct system more recognizable by a human.",
"Furthermore, being sub-optimal for German, this structure can cause preferential edits by the post-editors, which would be in line with the concentration of errors in the word order category observed in Table 6 (+19.6%).",
"Assessing the importance of context, the ability of humans to distinguish the systems does not improve when passing from 10 to 20 sentences per block.",
"This suggests that the behavioral differences between cascade and direct systems are so subtle that, on larger samples, they mix up and balance making their fingerprints less traceable.",
"As a complement to the human classification experiment, we check whether an automatic tool is able to accomplish a similar task.",
"Our classifier combines n -gram language models with the Naive Bayes algorithm, as proposed in (Peng and Schu-urmans, 2003).",
"We trained two 5-gram models, respectively using translations by the cascade and the direct systems.",
"At classification time, given a translated text, the classifier computes the perplexity of the two models and assigns the cascade or direct label based on the model with the lowest perplexity.",
"Also these experiments were carried out on the MuST-C Common set.",
"The classifier was tested via k-fold cross-validation, for different values of k i.e. different sizes of text to classify.",
"As shown in Figure 1, contrary to humans, the more data the classifier receives, the higher its accuracy in discriminating between systems.",
"Already at a size of 20 sentences, accuracy is always 80%.",
"This suggests that systems have their own lan-guage, a fluency-related fingerprint.",
"To check this finding, we measured outputs' lexical diversity in terms of moving average Type-Token Ratio maTTR (Covington and McFall, 2010) and with the Measure of Textual Lexical Diversity (MTLD) by McCarthy and Jarvis (2010).",
"Table 10 shows that the cascade output exhibits higher lexical diversity on all languages, with smaller differences on en-de and en-es compared to en-it.",
"A plausible conclusion is that the cascade produces richer output, whose variety does not necessarily result in better translations nor is appreciated by humans.",
"Indeed, annotators were able to correctly distinguish the output only for en-de, where lexical diversity is similar (see 7.1).",
"There is a time when the possible transition from consolidated technological frameworks to new emerging paradigms depends on answering fundamental questions about their potential, strengths and weaknesses.",
"A time when technology developers are faced with the choice of where to direct their future investments.",
"Five years after its appearance on the scene, the direct approach to ST confronts the community with similar questions in relation to the traditional cascade paradigm that it aims to overtake.",
"Our investigation showed that, in spite of the known data paucity conditions still penalizing the direct approach, the two technologies now perform substantially on par.",
"Subtle differences in their behavior exist: overall performance being equal, the cascade still seems to have an edge in terms of morphology, word ordering and lexical diversity, which is balanced by the advantages of direct models in audio understanding and in capturing prosody.",
"However, they do not seem sufficient and consistent enough across languages to make the output of the two approaches easily distinguishable, nor to make one model preferable to the other.",
"Back to our title, they no longer make a difference.",
"We are aware that the generalizability of these results depends on several factors such as the considered languages, systems and benchmarks, as well as the human workforce deployed for the inquiry.",
"Here, with the help of professionals, we proposed multi-faceted quantitative and qualitative analyses, run on the output of state-of-the-art systems on three language pairs though, by now, covering only the most-explored and data-favorable condition, which has English as source.",
"Although our findings hold for a specific scenario, in which free data were at our disposal (and to which we contribute back by releasing high-quality post-edits), they might not be generalizable to other (e.g. dif-ficult, distant) languages and other (e.g. highly specialized) domains.",
"Nevertheless, we present them as a timely contribution towards answering a burning question within the ST community.",
"The creation of the post-edits used in this work was funded by the European Association for Machine Translation (EAMT) through its 2020 Sponsorship of Activities programme.",
"The computational costs were covered by the End-to-end Spoken Language Translation in Rich Data Conditions project, 12 which was financially supported by an Amazon AWS ML Grant."
] | [
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"other",
"other"
] |
[
"An audience's prior beliefs and morals are strong indicators of how likely they will be affected by a given argument.",
"Utilizing such knowledge can help focus on shared values to bring disagreeing parties towards agreement.",
"In argumentation technology, however, this is barely exploited so far.",
"This paper studies the feasibility of automatically generating morally framed arguments as well as their effect on different audiences.",
"Following the moral foundation theory, we propose a system that effectively generates arguments focusing on different morals.",
"In an in-depth user study, we ask liberals and conservatives to evaluate the impact of these arguments.",
"Our results suggest that, particularly when prior beliefs are challenged, an audience becomes more affected by morally framed arguments.",
"In the last years, more research has been dedicated to studying prior beliefs in argumentation.",
"Understanding the role of prior beliefs helps people craft more effective arguments when targeting a particular audience.",
"Accordingly, operationalizing knowledge about the audience as part of supporting tools for humans or realized in fully automated debating technologies (Slonim et al., 2021) could benefit the production of arguments that bridge the gap between disputed parties by focusing on the shared beliefs rather than divisive ones (Alshomary and Wachsmuth, 2021).",
"In social psychology, a body of research employs the notion of morals to understand people's judgments on controversial topics (Haidt, 2012; Fulgoni et al., 2016).",
"Feinberg and Willer (2015) demonstrated that arguments become more effective when they match the morals of the target audience.",
"Multiple works in computational linguistics have analyzed the persuasive effectiveness of arguments depending on the target audience (Durmus and Cardie, 2018; El Baff et al., 2020), showing that audience-based features reliably predict effectiveness.",
"Different proxies of beliefs have been proposed as part of this, ranging from interests and personality traits (Al Khatib et al., 2020) to stances on popular issues (Alshomary et al., 2021a).",
"The authors of the latter used the stances to control the generation of argumentative texts.",
"However, they did not assess the effectiveness of these texts on the audience, leaving the importance of encoding beliefs ultimately unclear.",
"Beyond that, little research has been done on generating arguments tailored towards a specific audience, let alone on the importance of morals in achieving agreement.",
"This work studies the feasibility of generating morally framed arguments computationally and the effect of these arguments on different audiences.",
"For this, we rely on the moral foundation theory (Haidt, 2012) that projects the moral system into five foundations: care, fairness, loyalty, authority, and purity.",
"To produce arguments that address specific morals, we extend the capabilities of Project Debater (Slonim et al., 2021).",
"1 Project Debater includes a hybrid approach of multiple components designed to generate arguments of high quality that compete with human arguments.",
"Building on this technology helps us focus on evaluating the impact of morally framed arguments, as it ensures a certain base quality level in the generation.",
"In particular, our proposed extended system takes as input a controversial topic, a stance, and a set of morals.",
"It retrieves a set of argumentative texts, filters the ones conveying the input morals, and finally phrases an argument holding the given stance on the given topic focusing on the given morals.",
"To identify morals in texts, we rely on distant supervision: We use the Reddit dataset of Schiller et al. (2020), which contains argumentative texts with annotated aspects, along with the 1 Available via an API under: https://early-access-program.",
"debater.res.ibm.com/ 8782 moral-to-concept lexicon of Hulpus et al. (2020) for the automatic mapping from aspects to morals.",
"Then, we train a BERT-based classifier, achieving high performance on the moral dataset of Kobbe et al. (2020) compared to ablation baselines.",
"To assess the effect of morally framed arguments on a particular audience, we consider liberals and conservatives as alternative audiences.",
"We study the question of whether morally loaded arguments are more effective on a specific audience.",
"Additionally, we investigate whether differently-framed moral arguments (targeting different morals) vary in their effect on liberals and conservatives.",
"In line with El Baff et al. (2018), we design a user study to answer both questions.",
"Our study separately asks three liberals and three conservatives to rank different arguments on specific controversial issues based on how effective they were in challenging or empowering the audience's stance.",
"The results suggest that, when arguments challenge the stance, the morally framed ones are generally more effective, especially for the conservative audience.",
"We find that liberals value arguments that focus on their own morals (care and fairness) the most, especially when their stance is challenged.",
"Although conservatives also value respective arguments, a focus on typically conservative morals (loyalty, authority, and purity) becomes more relevant to them when their stance is challenged.",
"Despite the limited size of our user study, these findings hint at the importance of utilizing morals to craft more effective arguments.",
"The code, the trained model, and the data are made publicly available.",
"2 The contributions of this work can be summarized as follows: A state-of-the-art model for mining statements with defined morals from argumentative texts.",
"A system, based on Project Debater, for generating morally-framed arguments.",
"A user-study giving empirical evidence for the impact of morally-framed arguments on audiences of different political ideologies.",
"Prior Beliefs in Argumentation Studying persuasiveness in argumentation is an active field of research.",
"Besides planning argument content, 2 https://github.com/webis-de/ACL-22 structure, and style (Wachsmuth et al., 2018), researchers also considered audience-based features to model persuasiveness.",
"Among these, both Durmus and Cardie (2018) and El Baff et al. (2020) study how user factors such as religion and political background affect persuasiveness.",
"Al Khatib et al. (2020) demonstrated that user-based features reflecting beliefs, characteristics, and personality could increase the predictability of argument persuasiveness.",
"Moreover, Lukin et al. (2017) demonstrated that persuasiveness correlates with users' personality traits.",
"The moral foundation theory offers another conceptual model of understanding human judgments in daily life (Haidt and Joseph, 2004).",
"According to the theory, humans subconsciously adhere to five basic moral foundations when judging controversial issues: fairness (importance of justice, rights, and equality), care (being kind, and avoiding harm), loyalty (self-sacrifice, solidarity, belongingness), authority (respect to traditions and hierarchy), and purity (sacredness of religion and human).",
"Based on this theory, the disagreement between liberals and conservatives can be explained by the moral gap between the two parties.",
"While liberals rely mainly on care and fairness (so-called individualizing morals ) in their assessment of controversial issues, conservatives consider all moral foundations more evenly, somewhat skewed towards the binding morals though, that is, loyalty, authority, and purity (Graham et al., 2009).",
"Several studies provided evidence of the robustness of the moral foundation theory in understanding people's behaviors and decisions (Feinberg and Willer, 2015; Fulgoni et al., 2016; Johnson and Goldwasser, 2018).",
"For computational purposes, Kobbe et al. (2020) annotated moral foundations in arguments and analyzed the correlation between morals and argument quality.",
"We use their corpus in our experiments to evaluate our moral classifier.",
"Argument Generation Argument generation approaches have been proposed for a spectrum of different tasks.",
"Hua et al. (2019) and Alshomary et al. (2021b) worked on counter-argument generation, tackling the task by rebutting an argument's conclusion or by undermining one of its weak premises, respectively.",
"Others opposed a given claim (Bilu et al., 2015; Hidey and McKeown, 2019), generated argumentative claims on a given topic controlled for certain aspects (Schiller et al., 2020), or reconstructed implicit conclusions (Alshomary 8783 Marijuana Gun Death Minimum Nuclear School Moral Legalization Control Abortion Penalty Wage Energy Cloning Uniforms Care 14% 31% 19% 13% 16% 32% 20% 10% Fairness 13% 26% 28% 22% 23% 9% 13% 16% Loyalty 9% 13% 14% 21% 34% 20% 24% 38% Authority 54% 25% 21% 7% 8% 2% 25% 8% Purity 10% 5% 17% 36% 19% 37% 17% 28% Table 1: Distribution of the five moral foundations across the eight topics in the constructed Reddit dataset. The topics Cloning and School Uniforms are used for validation, all others for training. et al., 2020; Syed et al., 2021).",
"Recently, Alshomary et al. (2021a) introduced the task of belief-based claim generation, aiming to generate claims on a given topic that match a given target audience.",
"As a model of beliefs, they considered people's stances on big issues.",
"While they demonstrated the feasibility of encoding beliefs into claims, they did not study the effectiveness of these claims on an audience.",
"Another driver in the field of argument generation is Project Debater by Slonim et al. (2021).",
"Project Debater is an end-to-end system for argument mining, retrieval, and generation.",
"The system relies on a hybrid approach consisting of retrieval, mining, clustering, and rephrasing components.",
"In their manual evaluation, Slonim et al. (2021) observe a competent quality of generated arguments compared to those crafted by humans.",
"To generate morally framed arguments, we extend their system.",
"Relying on Project Debater helps us alleviate confounding effects of argument quality and, so, to focus on testing whether moral utilization affects the audience.",
"Our proposed system relies on the ability to identify morals in arguments automatically.",
"Existing approaches to mining morals from texts are either lexicon-based or machine learning-based.",
"A number of datasets with morals have been constructed for domains such as social media or news articles.",
"For argumentative texts, Kobbe et al. (2020) manually annotated a small dataset of 220 arguments, which is only suitable for evaluation.",
"We, therefore, decided to develop a moral foundation classifier based on data collected automatically using distant supervision, as explained in the following.",
"To circumvent the need for annotated data, we construct a training dataset following a distant-supervision approach.",
"In particular, moral foundations are revealed as aspects of concerns in discussions of controversial topics.",
"For example, when discussing School Uniform from the authority perspective, aspects such as respect and obedience often arise.",
"Given this observation, we referred to the dataset of Schiller et al. (2020) which contains short argumentative texts on eight topics along with aspects annotated automatically for each text.",
"We then assigned each text a set of moral foundations based on the aspects appearing in the text.",
"To map aspects to moral foundations, we employed the lexicon of Hulpus et al. (2020) which connects moral foundations to Wikipedia concepts.",
"After filtering out arguments without any mapping and balancing the data across the five moral foundations, this resulted in a dataset with 230k argumentative texts and the corresponding morals.",
"We split the dataset into six topics for training and two for validation (testing will happen on other data below).",
"Details on the distribution of the morals across topics are found in Table 1. To assess the quality of the distantly supervised dataset, two authors of the paper manually evaluated the correctness of the assigned morals on a sample of 100 examples.",
"77% of the cases were considered correct by at least one author, 44% by both.",
"The Cohen's agreement was 0.32, which is not high, but in line with other subjective argument-related annotations (El Baff et al., 2018).",
"Table 2 shows example sentences with assigned morals from the dataset.",
"We rely on a BERT-based classifier to identify morals in texts (Devlin et al., 2019), starting from the pre-trained bert-based-cased model.",
"We fine-tuned the model on our training set for three epochs 8784 Argumentative Sentence Moral Abortion isn't murder because abortion is legal and murder is an illegal killing of another person.",
"with a batch size of 16 and a learning rate of 3 e 5 .",
"In the training phase, the input was an argumentative sentence and the corresponding moral foundation.",
"Since an argument may contain multiple sentences, each reflecting a specific moral, an argument's final set of morals consists of all sentences' morals predicted with confidence above 0.5.",
"To assess the classifier's effectiveness more reliably, we trained six models on different random samples of size 50k and computed their average F 1 -score.",
"For comparison, we consider two baselines.",
"The first is the model performing best in the experiments of Kobbe et al. (2020), which is a multi-label BERT-based model trained on the Twitter moral corpus of Hoover et al. (2020).",
"We trained our own version on the same dataset and referred to it as mBERT .",
"The second baseline is a simple lexicon-based approach that computes the frequency of words belonging to each of the moral foundations (Araque et al., 2020), called Lexicon below.",
"As a lexicon, we used the moralstrength library.",
"3 .",
"We tested all models on the dataset of Kobbe et al. (2020), which consists of 220 arguments annotated for moral foundations by two annotators.",
"Table 3 shows the F 1 -score of all evaluated models for each moral as well as the macro F 1 -score.",
"Additionally, we show the precision and recall for each approach.",
"In terms of F 1 -score, our approach outperforms both baselines across three of the five moral foundations as well as on average.",
"We observe that effectiveness varies in terms of precision and recall between the Lexicon and the mBERT 3 Link: https://github.com/oaraque/moral-foundations ...",
"baseline.",
"The stable effectiveness of our approach across the five morals signals the advantage of the proposed dataset that we used in our approach.",
"Hence, we use this model later for morally framed argument generation in our system.",
"Table 4 shows two example arguments with the manually annotated morals along with the ones predicted by the baselines and by our approach.",
"We see that the Lexicon baseline assigns all morals to each argument most of the time, leading to the high recall across all morals.",
"The first row of the table shows an example argument from the test set in which our approach was able to detect its authority moral while mBERT failed.",
"In the second row, our approach missed the care moral in the argument but highlighted loyalty , a moral that probably emerges from the aspect of helping each other.",
"This section describes the system that we developed to study the effect of morally framed arguments on the audience.",
"Our design extends the capabilities available via the Project Debater API with the moral foundation classifier from Section 3. As input, it takes a controversial topic (say, glob-alization), a stance on the topic (say, pro ), and a set of morals to be targeted (say, loyalty , authority , and purity ).",
"Then, it determines a collection of claims and evidence on the given topic that focus 8785 Care Fairness Loyalty Authority Purity Macro Approach Pre Rec F 1 Pre Rec F 1 Pre Rec F 1 Pre Rec F 1 Pre Rec F 1 Pre Rec F 1 Lexicon 0.64 0.88 0.60 0.07 0.70 0.13 0.09 0.86 0.17 0.14 0.63 0.23 0.16 0.72 0.27 0.18 0.76 0.28 mBERT 0.74 0.38 0.50 0.47 0.35 0.40 0.50 0.10 0.16 0.43 0.09 0.14 0.56 0.13 0.21 0.54 0.21 0.28 Ours 0.54 0.56 0.52 0.31 0.55 0.37 0.21 0.54 0.28 0.23 0.74 0.34 0.46 0.48 0.46 0.35 0.57 0.40 Table 3: Moral foundation classification: Precision (Pre), recall (Rec), and F 1 -score (F 1 ) of our approach and the baselines for each moral foundation as well as the macro averages.",
"on the given morals, from which it constructs an argument with the given stance.",
"Figure 1 shows the high-level process of the proposed system.",
"We describe the three-step process in detail in the following, highlighting our proposed integration.",
"First, the system retrieves a collection of argumentative sentences discussing the controversial topic from Project Debater's index, which contains 400 million news articles.",
"The articles are split into sentences and indexed along with several meta-annotations.",
"We generate several queries containing only the topic keywords without any topic expansion to focus on relevant sentences.",
"We restrict the retrieved sentences to only those annotated as having sentiment or causality markers.",
"Section 5 gives more details on the constructed queries.",
"Second, the trained classifier is used to annotate each argumentative sentence for all likely moral foundations.",
"It then filters out those sentences that either does not have any moral or contain at least one moral not given as input.",
"Next, through Project Debater's API, the system generates for each of the remaining sentences a likelihood score reflecting whether it contains a claim or evidence following the approach of Ein-Dor et al. (2020).",
"We instruct the API to keep only sentences having a claim with likelihood higher than claim_threshold or evidence with likelihood higher than evidence_threshold (the exact thresholds are given in Section 5).",
"Additionally, the API identifies claim boundaries for sentences containing claims and extracts the exact span of text containing the claim.",
"Third, our proposed extension aggregates the given list of claims and evidence sentences with the input topic and stance.",
"It then uses Project Debater's narrative generation API to generate the final argument.",
"The narrative generation identifies the stance of claims and evidence towards the topic according to the approach of Bar-Haim et al. (2017).",
"Only those matching the input stance are kept.",
"Redundant elements are then filtered out, and the remaining ones are grouped into thematic clusters, where a theme is a Wikipedia title (Slonim et al., 2005).",
"The process of building these clusters also includes extracting one claim that represents the theme.",
"Each theme will then be represented by a paragraph in the output argument.",
"Finally, a set of algorithms is used to perform various kinds of re-phrasing on the argument level (e.g., pronoun resolution) and on the paragraph level (e.g., ensuring that different arguments are put together) (Slonim et al., 2021).",
"Example arguments with and without controlled morals are shown in Table 5.",
"By concept, each argument starts with an introductory paragraph listing the main themes of discussion, followed by a set of paragraphs, each combining claims and evidence on one theme.",
"Binding argument: The crowd raised four issues, explaining its views.",
"The first claim is that globalization is reducing the importance of nation-states.",
"The next issue will show how Globalization and structural forces aggravate poverty.",
"In addition, we will hear about pollution and Culture.",
"Lastly, Culture.",
"Globalization has destabilized previously immutable social institutions, shifting cultural value away from old traditions to new more individualistic and market friendly ideas.",
"It is often said to have a negative effect on the world's cultural diversity.",
"Cultural and geographical dimensions of transformational leadership become blurred as globalization renders ethnically specific collectivist and individualistic effects of organizational behavior obsolete in a more diversified workplace.",
"Individualizing argument: The crowd raised four issues, explaining its views.",
"The first claim is that Globalization on its own cannot end gender inequality.",
"In addition, we will hear about harm, economy and processes.",
"Starting with gender inequality.",
"There are various studies available that depict globalization as a hindrance toward gender inequality.",
"Globalization on its own cannot end gender inequality.Turningto harm.",
"Globalization is a threat to culture and religion, and it harms indigenous people groups while multinational corporations profit from it.",
"It has been criticized for benefiting those who are already large and in power at the risk and growing vulnerability of the countries' indigenous population.",
"Uncontrolled argument: The crowd raised four issues, explaining its views.",
"The first claim is that globalisation creates economic and cultural imbalances in developing nations.",
"The next issue will show how globalization is reducing the importance of nation-states.",
"And the third point is that globalization is a threat.",
"In addition, we will hear about processes.",
"Starting with economy.",
"Globalization does not work for all the economies that it affects, and that it does not always deliver the economic growth that is expected of it.",
"Globalisation and neoliberalism have exacerbated already unequal economic relations.",
"Although globalization takes similar steps in most countries, scholars such as Hodge claim that it might not be effective to certain countries and that globalization has actually moved some countries backward instead of developing them.",
"Table 5: Example generated arguments against Globalization for different focused morals.",
"To evaluate our hypothesis on the effect of morally framed arguments, we carried out a user study with two opposing target audiences, liberals and conservatives .",
"Our primary goal was to investigate whether morally framed arguments are more effective than uncontrolled ones.",
"Additionally, we sought to determine whether differently-framed arguments affect liberals and conservatives differently.",
"In the following, we report on this study.",
"Arguments We considered ten popular topics from the website debate.org, called big issues there.",
"For each topic, we used our system to construct three arguments: one argument focusing on care and fairness ( individualizing ), one focusing on loyalty, authority, and purity ( binding ), and one baseline argument where we did not control the morals targeted ( uncontrolled ).",
"We created arguments separately for both stances (pro and con), resulting in a total of 10 3 2 = 60 arguments.",
"To construct each argument, we used the following parameters.",
"For each topic, we built four queries, retrieving 10k sentences with 6 to 60 tokens per query.",
"The first query retrieved sentences containing the topic.",
"The second and third query targeted claim-like sentences, requiring the occurrence of",
"(a) at least one causality marker or",
"(b) both causality and a sentiment marker.",
"Each needed to appear together with the topic in a window of 12 tokens.",
"The last query aimed to retrieve evidence by filtering only those sentences that contained any of the following tokens: surveys, analyses, re-searches, reports, research, and survey.",
"A moral was assigned to a retrieved sentence if the probability of our classifier was higher than 0.5.",
"After initial tests, we set the claim_threshold and evidence_threshold to 0.8 and 0.6 respectively.",
"We left all other settings to the default values of Project Debater's API.",
"Internal Study on Argument Quality Before we launched our main study, two authors of this paper manually assessed the quality of the generated arguments and the morals addressed in each.",
"In particular, each of them read all 60 arguments and ranked their relevance , coherence , and argumentativeness on a 5-point Likert scale.",
"While reading each argument, they also highlighted spans of text that they found to reflect a specific moral.",
"External Study on Argument Effectiveness To answer our research questions, we conducted a two-phase user study on the platform Upwork : First, we determined the political ideology of each participant, and then, we let selected participants rank the different arguments.",
"In the first phase, we asked people living in the US that are experienced in writing and content editing to perform the Political Typology Quiz , available through the Pew Research Center, in order to 8787 Type Argumentativeness Relevance Coherence Binding arguments 3.8 3.8 4.0 Individualizing arguments 4.2 4.0 3.9 Uncontrolled arguments 4.1 4.1 3.9 Table 6: Mean quality scores of the three types of evaluated arguments on a 5-point scale (higher is better).",
"identify their political ideology.",
"4 In 17 questions, the quiz asks participants to state their views on controversial issues in the US.",
"The test results place the participants on a spectrum of ideologies from solid liberal (left) to core conservative (right).",
"In the second phase, we chose only six participants from the first phase due to budget constraints, three solid liberals (one male, two female) and three core conservatives (two males, one female).",
"We showed each of them three arguments (one individualizing, one binding, one uncontrolled) for all 20 topic-stance pairs.",
"For each pair, the participants read the three arguments and ranked them by perceived effectiveness .",
"We followed El Baff et al. (2018), defining the effectiveness of an argument either by how empowering it is (if the participant has the same stance on the topic) or by how challenging it is (otherwise).",
"For this purpose, the participants self-assessed their stances on each topic on a 5-point Likert scale, from 1 (strongly disagree) to 5 (strongly support) before reading the arguments.",
"5 5.2 Results In the following, we present the results of both studies, attempting to answer our research questions.",
"Argument Quality Table 6 presents the quality scores for each argument type and Table 7 the distribution of moral foundations.",
"Comparing the scores of binding and individualizing arguments to the uncontrolled ones, we see that our method did not notably worsen the quality of the generated arguments.",
"The moral foundation distribution 4 Political Typology Quiz: https://www.pewresearch.org/ politics/quiz/political-typology/ 5 Given an estimated workload of 3 to 3.5 hours, we paid each participant a fixed rate of $75.",
"indicates that binding arguments have a relatively higher focus on loyalty, authority, and purity than the individualizing arguments and a lower focus on fairness and care.",
"This supports the impact of our method on controlling morals in arguments.",
"Empowering vs. Challenging Figure 2 shows the distribution of challenging and empowering arguments.",
"Liberals were more decisive with their stance on the given topics, with 73% being on the pro side, whereas only 30% of the conservatives were on that side (50% con side, 20% no stance).",
"Since we presented arguments for both sides for each topic, we had an equal distribution of empowering and challenging arguments for the liberals.",
"However, for conservatives, we had 40% empowering and 40% challenging arguments due to the 20% undecided cases.",
"Since arguments supporting one side of a debate are rather challenging for the undecided audience, in our analysis below, we consider the 20% undecided cases to be challenging.",
"Effectiveness of Moral Arguments Table 8 shows the rank distribution for morally-framed arguments ( binding and individualizing ) compared to the uncontrolled ones for liberals, conservatives, and all together.",
"In general, the participants ranked the arguments framed in terms of fairness and care ( individualizing ) significantly better than the uncontrolled ones, with an average rank of 1.72 compared to 2.08.",
"6 This difference is significant at p < 0 .",
"05 using student t -test.",
"This signals a positive answer to our first research question: a focus on morals can make arguments more effective.",
"A closer look at the distribution of arguments at Rank 1 shows that conservatives were more susceptible to moral arguments (75% binding and individualizing) compared to liberals (63%).",
"Next, we examine whether arguments with different morals affect liberals and conservatives differently by looking at the achieved ranks of both empowering and challenging arguments.",
"Effectiveness depending on Ideology Looking at the mean ranks assigned by liberals in Table 9, we observe that challenging arguments that focus on individualizing morals (care and fairness) are most effective.",
"We validate that the difference is significant for p < 0 .",
"1 using student t -test.",
"This is in line with Feinberg and Willer (2015) who found that arguments framed in terms of liberal morals were more convincing to liberals.",
"Notably, this effectiveness slightly decreases when arguments are empowering.",
"A reasonable hypothesis is that, in the case of empowering arguments, the audience may 6 The difference in mean ranks is 0.36, 95% CI [0.17, 0.57].",
"be more interested in the opposing views, which might be covered by uncontrolled arguments.",
"We investigate this hypothesis further via a follow-up questionnaire below.",
"Now, we look at the conservatives.",
"Although they also valued the individual arguments the most, we observe that, when arguments challenged their views, a focus on binding morals (loyalty, authority, and purity) became slightly more effective than the uncontrolled arguments.",
"Generally, morally framed arguments that challenged the views of conservatives were significantly more effective than uncontrolled ones at p < 0 .",
"1 using the student t -test.",
"Agreement across Ideologies We measured inter-annotator agreement between the participants using Kendall's W (Kendall and Smith, 1939).",
"The agreement of all six participants was 0.29.",
"In contrast, when considering liberals and conservatives separately, it increased to 0.35 for liberals and 0.51 for conservatives.",
"This indicates higher agreement between participants having similar political ideology and matches the common notion that conservatives are more unified in their views than liberals.",
"Reasons behind Effectiveness Judgments In a follow-up questionnaire, we investigated our par-ticipants' judgments.",
"We asked them to self-assess whether they prefer (1) arguments with knowledge they have or are not familiar with, (2) arguments that matched or challenged their own views , (3) arguments that convince others who share or oppose their views , and (4) what affected the judgments of argument effectiveness more: knowledge or views, each in empowering and challenging cases.",
"Table 10 shows that the participants ranked knowledge as the most relevant effectiveness aspect.",
"In terms of others' views , the majority valued arguments that focus on the opposing views, whereas preferences differ for empowering and challenging arguments on own views .",
"Due to the reliability issues of self-assessment of one's moral judgments (Pizarro, 2000), we acknowledge the limitation of this study, though.",
"We present details on the questionnaire and its results in the appendix.",
"We explicitly acknowledge the limited number of liberals and conservatives that we recruited in our final user study, who might not represent the whole population.",
"Since the low sample size affects the 8789 Empowering Challenging All Knowledge Know about 33.3% 0.0% 16.7% Not familiar 66.7% 83.3% 75.0% Neither 0.0% 16.7% 8.3% Own views Matched 50.0% 16.7% 33.3% Challenging 33.3% 50.0% 41.7% Neither 16.7% 33.37% 25.0% Others' views Share view 16.7% 0.0% 8.3% Oppose view 66.7% 66.7% 66.7% Neither 16.7% 33.3% 25.0% Effectiveness Knowledge 83.3% 66.7% 75.0% Views 16.7% 33.3% 25.0% Neither 0.0% 0.0% 0.0% Table 10: Distribution of preferences (options) selected by the annotators for each of the four asked questions for empowering and challenging cases.",
"reliability of our results, we performed significance tests to report our main results with a certain confidence level.",
"As far as budget permits, further studies should be run with a more significant sample to reassess the reliability of the results.",
"Despite the limitation above, the results of our evaluation indicate that arguments targeting the moral foundations of care and fairness are more effective than others, at least in the tested sample.",
"Also, we observed that focusing on the audience's views in challenging arguments makes them more effective.",
"Regarding political ideologies, while our results match the literature in that liberals were affected more by arguments focusing on their views (care and fairness), especially when challenged, conservatives did not rank binding arguments higher than individualizing ones.",
"However, the conservatives still showed a higher tendency to be affected by morally framed arguments than liberals.",
"Our approach is limited by its capability to retrieve argumentative texts that discuss the topic from different moral perspectives.",
"In the retrieval component, we have focused on obtaining sentences containing precisely the addressed topic.",
"A more elaborated approach could broaden the search to relevant topics through topic expansion.",
"Further, we could use Project Debater only through the predefined API, restricting the way we integrate the moral tagging of sentences.",
"Ideally, it would be performed during indexing; then, choosing the morals to focus on could be defined as part of the queries.",
"Also, it is noteworthy that our moral classifier is trained on an automatically annotated dataset which may have limited its performance.",
"Still, we demonstrated that it is feasible to tune arguments automatically to target certain morals and that such arguments tend to be more effective.",
"Moreover, our results indicate that differently framed arguments have a different effect on different audiences.",
"This opens opportunities for generating more effective arguments when targeting a specific audience, bridging the gap between disagreeing parties by focusing on the shared beliefs.",
"In this work, we have proposed an extension of Project Debater that generates morally framed arguments.",
"Due to the lack of training data for classifying morals, we have used distant supervision to col-lect a large set of argumentative sentences automatically annotated for moral foundations.",
"Training a BERT-based classifier on the data yielded state-of-the-art results.",
"We have integrated the classifier with functionalities of Project Debater to tune arguments tuned towards specific morals.",
"According to our user study, arguments with morals relevant to liberals are more effective than arguments without any control of morals.",
"Also, conservatives were more affected by moral arguments.",
"Our results demonstrate the feasibility of generating morally framed arguments.",
"By focusing on shared morals, we believe that a respective system helps bridge disagreement of opposing audiences in practice.",
"This work was funded by the Deutsche Forschungs-gemeinschaft (DFG, German Research Founda-tion): TRR 318/1 2021 438445824.",
"We would also like to thank the reviewers and the participants who took part anonymously in our user study.",
"The responsible administrative board of Paderborn University formally approved our external study.",
"We did not gather any personal information about the participants that could connect their ideologies to their identity.",
"Accordingly, no sensitive information was sent via the Project Debater API.",
"Also, we ensured that they get paid more than the minimum wage in the U.S., namely 75$ for a workload of 3 to 3.5 hours.",
"Once more, we would like to indicate here that the results of this paper should be considered preliminary due to the limited number of surveyed users.",
"Budget constraints did not allow to increase this number.",
"Working on technologies that aim to persuade an audience raises ethical concerns.",
"Using knowledge about a user's morals is a critical act, and if done, it should be clearly communicated to the users.",
"Similarly, generating arguments aiming to convince the audience could be thought of as a manipulation attempt.",
"To avoid manipulation, any technology aiming at convincing the audience should be transparent about it by keeping the user informed of how their information is being used.",
"Since we rely on the arguments provided by the Project Debater API, we ultimately cannot control their content, but our internal quality assessment study did not raise any notable concerns on the information contained.",
"As mentioned, the long-term goal of our envisioned system is not to use the audience's morals to generate arguments that convince them.",
"Rather, given two disagreeing parties, the goal is to generate a wider spectrum of arguments covering relevant morals for both parties.",
"We have argued in this paper that such morally rich arguments are more effective and could better achieve agreement between the disagreeing parties."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"other",
"abstain",
"result",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Cross-modal language generation tasks such as image captioning are directly hurt in their ability to support non-English languages by the trend of data-hungry models combined with the lack of non-English annotations.",
"We investigate potential solutions for combining existing language-generation annotations in English with translation capabilities in order to create solutions at web-scale in both domain and language coverage.",
"We describe an approach called Pivot-Language Generation Stabilization (PLuGS), which leverages directly at training time both existing English annotations (gold data) as well as their machine-translated versions (silver data); at run-time, it generates first an English caption and then a corresponding target-language caption.",
"We show that PLuGS models outperform other candidate solutions in evaluations performed over 5 different target languages, under a large-domain testset using images from the Open Images dataset.",
"Furthermore, we find an interesting effect where the English captions generated by the PLuGS models are better than the captions generated by the original, monolingual English model.",
"Data hungry state-of-the-art neural models for language generation have the undesired potential to widen the quality gap between English and non-English languages, given the scarcity of non-English labeled data.",
"One notable exception is machine translation, which benefits from large amounts of bilingually or multilingually annotated data.",
"But cross-modal language generation tasks, such as automatic image captioning, tend to be directly hurt by this trend: existing datasets such as Flickr (Young et al., 2014a), MSCOCO (Lin et al., 2014), and Conceptual Captions (Sharma et al., 2018) have extensive labeled data for English, but labeled data is extremely scarce in other languages (Elliott et al., 2016) (at 2 orders of magnitude less for a couple of languages, and none for the rest).",
"In this paper, we conduct a study aimed at answering the following question: given a large annotated web-scale dataset such as Conceptual Captions (Sharma et al., 2018) in one language, and a baseline machine translation system, what is the optimal way to scale a cross-modality language generation system to new languages at web-scale?",
"We focus our study on the task of automatic image captioning, as a representative for cross-modal language generation where back-and-forth consistency cannot be leveraged in a straightforward manner 1 .",
"In this framework, we proceed to test several possible solutions, as follows:",
"(a) leverage existing English (En) image captioning datasets to train a model that generates En captions, which are then translated into a target language X; we call this approach Train-Generate-Translate (TGT);",
"(b) leverage existing En captioning datasets and translation capabilities to first translate the data into the target language X, and then train a model that generates X -language captions; we call this approach Translate-Train-Generate (TTG);",
"(c) stabilize the TTG approach by directly using the En gold data along with the translated training data in the X language (silver data) to train a model that first generates En captions (conditioned on the image), and then generates X -language captions (conditioned on the image and the generated En caption); this approach has En acting as a pivot language between the input modality and the X language output text, stabilizing against and reduc-1 We chose to focus on the cross-modality version of this problem because for the text-only modality the problem is less severe (due to existing parallel data) and also more studied (Artetxe et al., 2018), as it is amenable to exploiting backand-forth consistency as a powerful learning signal.",
"ing potential translation noise.",
"We call the latter the Pivot-Language Generation Stabilization (PLuGS) approach.",
"Examples of outputs produced by these three solutions are shown in Fig. 1.",
"We perform extensive evaluations across five different languages (French, Italian, German, Spanish, Hindi) to compare these three approaches.",
"The results indicate that the bilingual PLuGS models consistently perform the best in terms of captioning accuracy.",
"Since there is very little support in the literature regarding the ability of standard evaluation metrics like BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005), CIDEr (Vedantam et al., 2015), and SPICE (Anderson et al., 2016) to accurately measure captioning accuracy for non-English languages, our evaluations are done using fine-grained, side-by-side human evaluations using paid raters; we explain the evaluation protocol in detail in Sec. 5.",
"Besides the evaluations on bilingual PLuGS models, we also train and evaluate a multilingual PLuGS model, in which all five non-English languages considered are supported through a single model capable of generating outputs in all 5 languages.",
"The results indicate that similar languages are reinforcing each other in the common representation space, showing quantitative gains for the Romance languages involved in our experiments.",
"A related but perhaps less expected result is that the English captions generated by PLuGS models (what we call the Stablizer outputs) are better, as measured using side-by-side human evaluations, than captions generated by the original, monolingual English model.",
"There is a final additional advantage to having PLuGS models as a solution: in real-world applications of image captioning, quality estimation of the resulting captions is an important component that has recently received attention (Levinboim et al., 2019).",
"Again, labeled data for quality-estimation (QE) is only available for English 2 , and generating it separately for other languages of interest is expensive, time-consuming, and scales poorly.",
"The TGT approach could directly apply a QE model at run-time on the En caption, but the subsequent translation step would need to be perfect in order not to ruin the predicted quality score.",
"The TTG ap-2 https://github.com/google-research-datasets/Image-Caption-Quality-Dataset proach cannot make use at run-time of an En QE model without translating the caption back to English and thus again requiring perfect translation in order not to ruin the predicted quality score.",
"In contrast, the PLuGS approach appears to be best suited for leveraging an existing En QE model, due to the availability of the generated bilingual output that tends to maintain consistency between the generated EN& X-language outputs, with respect to accuracy; therefore, directly applying an English QE model appears to be the most appropriate scalable solution.",
"There is a large body of work in automatic image captioning for English, starting with early work (Hodosh et al., 2013; Donahue et al., 2014; Karpathy and Fei-Fei, 2015; Kiros et al., 2015; Xu et al., 2015) based on data offered by manually annotated datasets such as Flickr30K (Young et al., 2014b) and MS-COCO (Lin et al., 2014), and more recently with work using Transformer-based models (Sharma et al., 2018; Zhao et al., 2019; Changpinyo et al., 2019) based on the web-scale Conceptual Captions dataset (Sharma et al., 2018).",
"Generating image captions in languages other than English has been explored in the context of the WMT 2017-2018 multimodal translation sub-task on multilingual caption generation (Elliott et al., 2017).",
"The goal of the task is to generate image captions in German and French, using a small training corpus with images and captions available in English, German and French (based on Flickr30K).",
"In the context of that work, we use the results reported in (Caglayan et al., 2019) to quantitatively compare it against our approach.",
"Another relevant connection is with the work in (Jaffe, 2017), which explores several LSTM-based encoder-decoder models that generate captions in different languages.",
"The model most similar to our work is their Dual Attention model, which first generates an English caption, then an LSTM with attention over the image and the generated English caption produces a German caption.",
"Their quantitative evaluations do not find any additional benefits for this approach.",
"Our work is related to this idea, but there are key technical differences.",
"In the PLuGS approach, we train an end-to-end model based on a Transformer (Vaswani et al., 2017) decoder that exploits the generated English-prefix via the self-attention mechanism to learn to predict the non-English target caption, conditioned on the English tokens at multiple levels through the decoder stack.",
"Moreover, we approach this study as the search for a solution for web-scale multi-language image captioning: we employ the web-sized Conceptual Captions dataset for training, and consider the effects of using captions across multiple languages, as well as multi-language/single-model setups.",
"We model the output caption using a sequence-generation approach based on Transformer Networks (Vaswani et al., 2017).",
"The output is the sequence of sub-tokens comprising the target caption.",
"As shown in Fig. 2, the input sequence is obtained by concatenating the following features.",
"Global Image Embedding: We use a global image representation using the Graph-RISE model (Juan et al., 2019), a ResNet-101 model (He et al., 2016) trained for image classification at ultra-fine granularity levels.",
"This model produces a com-pact image embedding i of dimension D i = 64 .",
"This embedding is projected to match Transformer dimensions (set to 512 in most of our experiments) by a 2 layer DNN with linear activation and fed as the first element in the sequence of inputs to the encoder.",
"Object Labels Embeddings: Detecting the presence of certain objects in the image (e.g. woman, flag, laptop) can help generate more accurate captions, since a good caption should mention the more salient objects.",
"The object labels are generated by an object detection model which is run over the entire image.",
"The output labels are then converted to vectors using word embeddings to obtain what we call object-label embeddings.",
"More precisely, we detect object labels over the entire image using a ResNet-101 object-detection classifier trained on the JFT dataset (Hinton et al., 2015).",
"The classifier produces a list of detected object-label identifiers, sorted in decreasing order by the classifier's confidence score; we use the first sixteen of these identifiers.",
"The identifiers are then mapped to embeddings o j using an object-label embedding layer which is pre-trained to predict label co-occurrences in web documents, using a word2vec approach (Mikolov et al., 2013).",
"The resulting sequence of embeddings is denoted O = ( o 1 , . . . , o | O | ) , where each o j has dimension D o = DNN objects DNN image Image Object Classifier GlobalFeaturesExtractor Label Embeddings Trainable Pre-trained/fixed) Text Transformer Inputs LangId Vocab Langid DNN LangId Vocab text Embedding TransformerEncoder Decoder Outputs (Shifted) Splitter Stabilizer Caption Vocab text Encoder Outputs TransformerDecoder Embedding Encoder-decoder Attention Linear SoftMax Probs Beam Search Decoder Outputs Figure 2: The Transformer based PLuGS model.",
"256 .",
"Each member of this sequence of embeddings is projected to match Transformer dimensions by a 2 layer DNN with linear activation.",
"This sequence of projected object-label embeddings is fed to the encoder together with the global image embedding.",
"LangId Embeddings: When training language-aware models, we add as input the language of the target sequence.",
"We specify the language using a language identifier string such as en for English, de for German, etc.",
"We call this the LangId of the target sequence or target LangId in short.",
"Given the target LangId, we encode it using a LangId vocabulary, project it to match Transformer dimensions with a 2 layer DNN, then append it to the encoder input sequence.",
"Text Embeddings: All text (input or output) is encoded using byte-pair encoding (Sennrich et al., 2016) with a shared source-target vocabulary of about 4000 tokens, then embedded as described in (Vaswani et al., 2017), resulting in a sequence of text embeddings.",
"The embeddings dimensions are chosen to match the Transformer dimensions.",
"When performing the translation (MT) and multimodal translation (MMT) experiments in Sec. 6.1, the sequence of source text embeddings are fed to the encoder after the LangId embedding.",
"Additionally, we reserve a token-id in the text vocabulary for each language (e.g. (cid:104) de (cid:105) for German) for use as a separator in the PLuGS model output and also have a separate start-of-sequence token for each language.",
"Decoding: We decode with beam search with beam width 5.",
"PLuGS: For PLuGS models, in addition to the target caption we require the model to generate a ... car parked in the city < de > Encoder Outputs Decoder Layer 1 Encoder-Decoder Attention Masked Self-Attention Trainable Fixed Previous tokens Add & Normalize Voc Emb Voc Emb Voc Emb Voc Emb Voc Emb Voc Emb FF FF FF FF FF FF Add & Normalize Add & Normalize Decoder Layer k ... parked in the city < de > Auto ...",
"pivot-language (En) caption which we call the Stabilizer.",
"Specifically, we train the model over target sequences of the form Stabilizer + (cid:104) separator (cid:105) + Caption.",
"We use (cid:104) $LangId (cid:105) as the separator (i.e., for German captions we use (cid:104) de (cid:105) as the separator).",
"This approach has the advantage that it can be applied to multilingual models as well.",
"We subsequently split the model output based on the separator to obtain two strings: the Stabilizer and the Caption.",
"Note an important technical advantage here: as shown in Fig. 3, after initially generating the Stabilizer output, the Transformer decoder is capable of exploiting it directly via the self-attention mechanism, and learn to predict the non-English Caption tokens conditioned (via teacher-forcing) on the gold-data English tokens at multiple levels through the decoder stack, in addition to the cross-attention mechanism attending to the inputs.",
"As our results indicate, the models are capable of maintaining this advantage at run-time as well, when auto-regressive decoding is performed.",
"We perform our experiments using two different benchmarks.",
"We use the Multi30K (Elliott et al., 2016) dataset in order to compare the effect of the PLuGS model using a resource that has been widely used in the community.",
"We focus on Task 1 for French from (Caglayan et al., 2019), generating a translation in French based on an image and an English caption as input.",
"The training set consists of images from the Flickr30K train and validation splits, along with the corresponding French captions.",
"The validation split consists of test2016 images and captions, and the test split consists of the test2017 images and captions.",
"For the core results in this paper, we use the Conceptual Captions dataset (Sharma et al., 2018) as our English-annotated generation labels, in order to capture web-scale phenomena related to image captioning.",
"In addition, we use Google Translate as the translation engine (both for the run-time translations needed for the TGT approach and the training-time translations needed for the TTG and PLuGS approaches), targeting French, Italian, German, Spanish, and Hindi as target languages.",
"We use the standard training and validation splits from Conceptual Captions for developing our models.",
"We report the results using a set of 1,000 randomly samples images from the Open Images Dataset (Kuznetsova et al., 2018).",
"We refer to this test set as OID1k when reporting our results.",
"In the experiments done using the Multi30K dataset, we are reporting results using the METEOR (Banerjee and Lavie, 2005) metric, in line with previous work.",
"For the experiments performed using the Conceptual Captions dataset, we have found that automated evaluation metrics for image captioning such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), CIDEr (Vedantam et al., 2015), and SPICE (Anderson et al., 2016) cannot accurately measure captioning accuracy for non-English languages.",
"However, we are reporting CIDEr numbers as a point of comparison, and contrast these numbers with human evaluation results.",
"We describe the human evaluation framework we use next.",
"We perform side-by-side human evaluation for comparing model outputs.",
"To compare two image captioning models A (baseline) vs B , we generate captions for these images with each model and ask human raters to compare them.",
"As illustrated in Fig. 4, the raters are shown the image with the two captions randomly placed to the left vs. right, and are asked to compare the captions on a side-by-side rating scale.",
"In addition, they are asked to also provide an absolute rating for each caption.",
"The absolute rating provides a cross-check on the comparison.",
"Each image and associated captions are rated by three raters in our experiments.",
"We calculate the following statistics using the resulting side-by-side rating comparisons: W ins : Percent of images where majority of raters (i.e. 2 out of 3) marked Caption B as better (after derandomization).",
"Losses : Percent of images where majority of raters marked Caption A as better.",
"Gain sxs = W ins Losses We also calculate the following statistics using the resulting absolute ratings: A Accept = Percent of images where majority of raters mark caption A as Acceptable, Good, or Excellent.",
"B Accept = Percent of images where majority of raters mark caption B as Acceptable, Good, or Excellent.",
"Gain Accept = B Accept A Accept The advantages of the Gain sxs and Gain Accept metrics is that they are intuitive, i.e., they measure the absolute increase in accuracy between the two experimental conditions 3 3 Inter-rater agreement analysis shows that for each evaluation comparing two models, two of the three raters agree on Win/Loss/Same for 90% to 95% of the items.",
"Further, for more than 98% of the items using the difference between the absolute ratings gives the same Win/Loss/Same values as obtained from the side-by-side ratings.",
"Also, for 80% to 85% of the absolute ratings, two of the three raters agree on the rating.",
"Multi30K: For the experiments using this dataset, we use a Transformer Network (Vaswani et al., 2017) with 3 encoder and 3 decoder layers, 8 heads, and model dimension 512.",
"We use the Adam optimizer (Kingma and Ba, 2015), and do a hyperparameter search over learning rates { 3 e 4 , e 4 , 3 e 5 , e 5 } with linear warmup over 16000 steps followed by exponential decay over { 50 k, 100 k } steps.",
"We use 5 e 6 as the weight for L 2 regularization.",
"We train with a batch size of 1024, using a dropout of 0.3, on 8 TPU (You et al., 2019) cores.",
"Conceptual Captions: For all except large multilingual models, we use a vanilla Transformer with 6 encoder and decoder layers, 8 heads, and model dimension 512.",
"We use the SGD optimizer, and do a hyperparameter search over learning rates { 0 .",
"12 , 0 .",
"15 , 0 .",
"18 , 0 .",
"21 , 0 .",
"24 } with linear warmup over 16000 steps followed by exponential decay over { 350 k, 450 k } steps.",
"For multilingual models, we also use linear warmup over 80000 steps.",
"We use 1 e 5 as the weight for L 2 regularization.",
"We train with a batch size of 4096, using a dropout of 0.3 on 32 TPU (You et al., 2019) cores.",
"For large multilingual models, we use a Transformer with 10 encoder and decoder layers, 12 heads, and model dimension 768 4 We also use a smaller learning rate of 0.09.",
"In order to compare our work to related work we train our models on the Multi30K dataset and compared our results to the results in (Caglayan et al., 2019).",
"We focus on Task 1: generate a French translation based on an image and English caption as input.",
"Table 1 shows the results on the Multi30K dataset for Multimodal Translation.",
"Note that since (Caglayan et al., 2019) does not show numbers for the pure (no caption input) image captioning task, we show numbers for the D 4 condition, where only the first 4 tokens of the English caption are provided as input to the image captioning model.",
"We see that the PLuGS model is able to produce numbers for MT and MMT that are close to the baseline, even thought it is just an image captioning model augmented to handle these tasks.",
"For the D 4 task, which is the closest to image captioning, the PLuGS model shows improvement over the baseline.",
"Furthermore, the results contain preliminary indications that the PLuGS approach produces better results compared to the non-PLuGS approach Task Baseline non-PLuGS PLuGS MT 70.6 66.6 67.7 MMT 70.9 64.7 65.6 IC-D 4 32.3 30.6 32.8 Table 1: Multi30K test set METEOR scores for Translation (MT), Multi Modal Translation (MMT), and Image Captioning (IC-D 4 ).",
"In this section, we evaluate the performance of models trained using Conceptual Captions, as detailed in Sec. 4.",
"Table 2 presents the results on the OID1k testset for the SxS human evaluations between the TGT and PLuGS models (upper half), and between the TTG and PLuGS models (lower half).",
"The results show that, for all five languages, the PLuGS model captions are consistently supe-rior to the TGT captions on both Gain SxS and Gain Accept metrics.",
"The Gain SxS are between 3% and 5% absolute percentages between TGT and PLuGS models, and 1% and 3% absolute percentages between TTG and PLuGS models, with similar trends for the Gain Accept metric.",
"Table 3 presents the CIDEr scores on the validation set of the Conceptual Captions v1.1 (CC-1.1).",
"The CIDEr metric fails to capture any meaningful correlation between its scores and the results of the SxS human evaluations.",
"We further explore the hypothesis that adding more languages inside one single model may perform",
"even better, as a result of both translation noise canceling out and the languages reinforcing each other in a common representation space.",
"In this vein, we rename the bilingual version as PLuGS-2L, and train several additional models: a TTG-5L model, which uses a LangId token as input and uses for training all translated captions for all five languages and English; a TTGlarge-5L model, for which we simply increased the capacity of the Transformer network (see Sec. 5.2); and a PLuGS-5L model, which is trained using groundtruth labels that are concatenations (using the LangId token as separator) between golden groundtruth En labels and their translated versions, for all five target languages.",
"Results using CIDEr are shown in Table 4.",
"Across all languages, the TTG-5L models show a large gap in the CIDEr scores as compared to the TTG monolingual models.",
"Using more capacity in the TTGlarge-5L model closes the gap only slightly.",
"However, the effect of using pivot-language stabilizers tends to be consistently larger, in terms of CIDEr improvements, than the ones obtained by increasing the model capacity.",
"To accurately evaluate the impact of multi-linguality, we also perform SxS evaluations between the PLuGS-2L (as the base condition) vs. Lang TTG PLuGS-2L TTG-5L TTGlarge-5L PLuGS-5L Fr 0.7932 0.7820 0.6834 0.7064 0.7264 It 0.7760 0.7813 0.6538 0.6885 0.6978 De 0.6079 0.6170 0.4992 0.5367 0.5503 Es 0.7907 0.7854 0.7093 0.7203 0.7284 Hi 0.7149 0.7155 0.5891 0.6201 0.6641 Table 4: CIDEr scores on CC-1.1 validation set for bilingual and multilingual models.",
"PLuGS-5L (as the test condition) models, over three languages (French, German, and Hindi).",
"As shown in Table 5, the PLuGS-5L model performs better on French and Italian (3% and 4% better on Gain sxs ), while performing worse on Hindi compared to the bilingual PLuGS Hindi model (-0.2% on Gain sxs , -3.9% on Gain Accept ).",
"The results are encouraging, and indeed support the hypothesis that similar languages are reinforcing each other in the common representation space, explaining the gain observed for the Romance languages and the detrimental impact on Hindi.",
"We also note here that the human evaluation results, except for Hindi, come in direct contradiction to the CIDEr metric results, which indicate a large performance hit for PLuGS-5L vs. PLuGS-2L, across all languages.",
"This reflects again the extreme care needed when judging the outcome of such experiments based on the existing automatic metrics.",
"As already mentioned, the PLuGS models generate outputs of the form Stabilizer + (cid:104) LangId (cid:105) + Caption.",
"We therefore ask the following question: how does the quality of the Stabilizer output compare to the quality of captions produced by the baseline English model (that is, the same model whose captions are translated to the target languages in the TGT approach)?",
"We perform SxS human evaluations over Stabilizer captions (English) for three different PLuGS-2L models (trained for French, German, and Span-ish).",
"As shown in Table 6, the somewhat unexpected answer is that these Stabilizer outputs are consistently better, as English captions, compared to the ones produced by the original monolingual English captioning model.",
"The Gain sxs are between 5% and 6% absolute percentage improvements, while Gain Accept also improves up to 3.4% absolute for the PLuGS-Fr model.",
"We again note that the CIDEr metric is not able to correctly capture this trend, as shown by the results in Table 7, which indicate a flat/reverse trend.",
"So far, we have verified that both the target-language Caption and the Stabilizer English outputs for the PLuGS-2L models are better compared to the alternative ways of producing them.",
"Additionally, we want to check whether the Stabilizer and the target-language Caption are actually translations of each other, and not just independently good captions associated with the input image.",
"In Table 9, we show the BLEU-4 score of the translation of the Stabilizer output for the PLuGS-2L models, compared to the corresponding PLuGS-2L Caption treated as a reference, using the images in the OID1k test set.",
"The high BLEU scores are indeed confirming that the Caption outputs are close translations of the Stabilizer English outputs.",
"This allows us to conclude that PLuGS models are indeed performing the double-duty of captioning and translation.",
"Finally, we perform an experiment to understand the extent to which the quality of the Stabilizer outputs is correlated with the quality of the target-language Captions, so that a QE model (Levinboim et al., 2019) trained for English can be applied directly on PLuGS model outputs (more specifically,",
"on the Stabilizer outputs).",
"To that end, we perform human evaluations of stand-alone captions.",
"In this type of evaluation, the raters are shown an image along with a single caption, and are asked to provide an absolute rating for the caption on a 4-point scale.",
"As before, we define the metric Accept = Percent of images where majority of raters (2 of 3) marked Caption as Acceptable, Good or Excellent.",
"Since these ratings are obtained individually for captions, we can use them to measure cross-lingual quality correlations.",
"We use the stand-alone caption evaluation results to compute quality correlations.",
"Table 8 shows the correlation between the median human rating for the Stabilizer (English caption) vs Caption (target-language caption) for the PLuGS models considered.",
"We see that the correlation is much higher compared to the baselines, calculated by computing the correlation of the median rating for the Stabilizer vs Caption (target-language) generated by the TGT and TTG approaches.",
"En QE model, due to the availability of the generated Stabilizer output that tends to maintain consistency between the English and the target-language caption, with respect to content accuracy.",
"We present a cross-modal language generation approach called PLuGS, which successfully combines the availability of an existing gold annotation (usually in English) with the availability of translation engines that automatically produce silver-data annotations.",
"The result is a multilingual engine capable of generating high-quality outputs in the target languages, with no gold annotations needed for these languages.",
"We show that, for image captioning, the PLuGS approach out-performs other alternatives, while also providing the ability to pack multiple languages in a single model for increased performance.",
"Surprisingly, by considering the generated outputs in the original language of the annotation (Stabilizer outputs), we find that the quality of the Stabilizers is higher compared to the outputs of a model trained on the original annotated data.",
"Overall, our results can be understood as a successful instance of transfer learning from a unimodal task (text-to-text translation) to a cross-modal task (image-to-text generation), which allows us to indirectly leverage the abundance of text-only parallel data annotations across many languages to improve the quality of an annotation-poor cross-modal setup."
] | [
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"result",
"result"
] |
[
"Pretraining deep language models has led to large performance gains in NLP.",
"Despite this success, Schick and Schutze (2020) recently showed that these models struggle to understand rare words.",
"For static word embeddings, this problem has been addressed by separately learning representations for rare words.",
"In this work, we transfer this idea to pretrained language models: We introduce BERTRAM , a powerful architecture based on BERT that is capable of inferring high-quality embeddings for rare words that are suitable as input representations for deep language models.",
"This is achieved by enabling the surface form and contexts of a word to interact with each other in a deep architecture.",
"Integrating BERTRAM into BERT leads to large performance increases due to improved representations of rare and medium frequency words on both a rare word probing task and three downstream tasks.",
"1 1 Introduction As word embedding algorithms (e.g. Mikolov et al., 2013) are known to struggle with rare words, several techniques for improving their representations have been proposed.",
"These approaches exploit either the contexts in which rare words occur (Lazari-dou et al., 2017; Herbelot and Baroni, 2017; Khodak et al., 2018; Liu et al., 2019a), their surface-form (Luong et al., 2013; Bojanowski et al., 2017; Pinter et al., 2017), or both (Schick and Schutze, 2019a,b; Hautte et al., 2019).",
"However, all of this prior work is designed for and evaluated on uncontextualized word embeddings.",
"Contextualized representations obtained from pretrained deep language models (e.g. Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Liu et al., 2019b) already handle rare words implicitly 1 Our implementation of BERTRAM is publicly available at https://github.com/timoschick/bertram .",
"using methods such as byte-pair encoding (Sen-nrich et al., 2016), WordPiece embeddings (Wu et al., 2016) and character-level CNNs (Baevski et al., 2019).",
"Nevertheless, Schick and Schutze (2020) recently showed that BERT's (Devlin et al., 2019) performance on a rare word probing task can be significantly improved by explicitly learning representations of rare words using Attentive Mimicking (AM) (Schick and Schutze, 2019a).",
"However, AM is limited in two important respects: For processing contexts, it uses a simple bag-of-words model, making poor use of the available information.",
"It combines form and context in a shallow fashion, preventing both input signals from interacting in a complex manner.",
"These limitations apply not only to AM, but to all previous work on obtaining representations for rare words by leveraging form and context.",
"While using bag-of-words models is a reasonable choice for static embeddings, which are often themselves bag-of-words (e.g. Mikolov et al., 2013; Bojanowski et al., 2017), it stands to reason that they are not the best choice to generate input representations for position-aware, deep language models.",
"To overcome these limitations, we introduce BERTRAM ( BERT fo r A ttentive M imicking), a novel architecture for learning rare word representations that combines a pretrained BERT model with AM.",
"As shown in Figure 1, the learned rare word representations can then be used as an improved input representation for another BERT model.",
"By giving BERTRAM access to both surface form and contexts starting at the lowest layer, a deep integration of both input signals becomes possible.",
"Assessing the effectiveness of methods like BERTRAM in a contextualized setting is challenging: While most previous work on rare words was evaluated on datasets explicitly focusing on rare words (e.g Luong et al., 2013; Herbelot and Baroni, 2017; Khodak et al., 2018; Liu et al., 2019a), these datasets are tailored to uncontextualized embeddings and thus not suitable for evaluating our model.",
"Furthermore, rare words are not well represented in commonly used downstream task datasets.",
"We therefore introduce rarification , a procedure to automatically convert evaluation datasets into ones for which rare words are guaranteed to be important.",
"This is achieved by replacing task-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet (Miller, 1995).",
"We rarify three common text (or text pair) classification datasets: MNLI (Williams et al., 2018), AG's News (Zhang et al., 2015) and DBPedia (Lehmann et al., 2015).",
"BERTRAM outperforms previous work on four English datasets by a large margin: on the three rarified datasets and on WNLaMPro (Schick and Schutze, 2020).",
"In summary, our contributions are as follows: We introduce BERTRAM , a model that integrates BERT into Attentive Mimicking, enabling a deep integration of surface-form and contexts and much better representations for rare words.",
"We devise rarification, a method that transforms evaluation datasets into ones for which rare words are guaranteed to be important.",
"We show that adding BERTRAM to BERT achieves a new state-of-the-art on WNLaMPro (Schick and Schutze, 2020) and beats all baselines on rarified AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 25% over BERT.",
"Surface-form information (e.g., morphemes, characters or character n -grams) is commonly used to improve word representations.",
"For static word embeddings, this information can either be injected into a given embedding space (Luong et al., 2013; Pinter et al., 2017), or a model can directly be given access to it during training (Bojanowski et al., 2017; Salle and Villavicencio, 2018; Piktus et al., 2019).",
"In the area of contextualized representations, many architectures employ subword segmentation methods (e.g. Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019b).",
"Others use riding a un ##ic ##y ##cle is hard BERT a riding is hard BERTBERTRAMBERTRAM unicycle Figure 1: Top: Standard use of BERT.",
"convolutional neural networks to directly access character-level information (Kim et al., 2016; Peters et al., 2018; Baevski et al., 2019).",
"Complementary to surface form, another useful source of information for understanding rare words are the contexts in which they occur (Lazaridou et al., 2017; Herbelot and Baroni, 2017; Khodak et al., 2018).",
"Schick and Schutze (2019a,b) show that combining form and context leads to significantly better results than using just one of the two.",
"While all of these methods are bag-of-words models, Liu et al. (2019a) recently proposed an architecture based on context2vec (Melamud et al., 2016).",
"However, in contrast to our work, they",
"(i) do not incorporate surface-form information and",
"(ii) do not directly access the hidden states of context2vec, but instead simply use its output distribution.",
"Several datasets focus on rare words, e.g., Stanford Rare Word (Luong et al., 2013), Definitional Nonce (Herbelot and Baroni, 2017), and Contextual Rare Word (Khodak et al., 2018).",
"However, unlike our rarified datasets, they are only suitable for evaluating uncontextualized word representations.",
"Rarification is related to adversarial example generation (e.g. Ebrahimi et al., 2018), which manipulates the input to change a model's prediction.",
"We use a similar mechanism to determine which words in a given sentence are most important and replace them with rare synonyms.",
"We first review the basis for our new model, the form-context model (FCM) (Schick and Schutze, 2019b).",
"Given a set of d -dimensional high-quality embeddings for frequent words, FCM induces embeddings for rare words that are appropriate for the given embedding space.",
"This is done as follows: Given a word w and a context C in which it occurs, a surface-form embedding v form ( w,C ) R d is obtained by averaging over embeddings of all character n -grams in w ; the n -gram embeddings are learned during training.",
"Similarly, a context embedding v context ( w,C ) R d is obtained by averaging over the embeddings of all words in C .",
"Finally, both embeddings are combined using a gate g ( v form ( w,C ) , v context ( w,C ) ) = ( x (cid:62) [ v form ( w,C ) ; v context ( w,C ) ] + y ) with parameters x R 2 d , y R and denoting the sigmoid function, allowing the model to decide how to weight surface-form and context.",
"The final representation of w is then a weighted combination of form and context embeddings: v ( w,C ) = ( Av context ( w,C ) + b ) + (1 ) v form ( w,C ) where = g ( v form ( w,C ) , v context ( w,C ) ) and A R d d , b R d are parameters learned during training.",
"The context part of FCM is able to capture the broad topic of rare words, but since it is a bag-of-words model, it is not capable of obtaining a more concrete or detailed understanding (see Schick and Schutze, 2019b).",
"Furthermore, the simple gating mechanism results in only a shallow combination of form and context.",
"That is, the model is not able to combine form and context until the very last step: While it can learn to weight form and context components, the two embeddings (form and context) do not share any information and thus do not influence each other.",
"To overcome these limitations, we introduce BERTRAM , a model that combines a pretrained BERT language model (Devlin et al., 2019) with Attentive Mimicking (Schick and Schutze, 2019a).",
"We denote with e t the (uncontextualized, i.e., first-layer) embedding assigned to a (wordpiece) token t by BERT.",
"Given a sequence of such uncontextualized embeddings e = e 1 , . . . , e n , we denote by h j ( e ) the contextualized representation of the j -th token at the final layer when the model is given e as input.",
"Given a word w and a context C in which it occurs, let t = t 1 , . . . , t m be the sequence obtained from C by",
"(i) replacing w with a [MASK] token and",
"(ii) tokenization (matching BERT's vocabu-lary); furthermore, let i denote the index for which t i = [MASK] .",
"We experiment with three variants of BERTRAM : BERTRAM-SHALLOW , BERTRAMREPLACE and BERTRAM-ADD .",
"2 SHALLOW .",
"Perhaps the simplest approach for obtaining a context embedding from C using BERT is to define v context ( w,C ) = h i ( e t 1 , . . . , e t m ) .",
"This approach aligns well with BERT's pretraining objective of predicting likely substitutes for [MASK] tokens from their contexts.",
"The context embedding v context ( w,C ) is then combined with its form counterpart as in FCM.",
"While this achieves our first goal of using a more sophisticated context model that goes beyond bag-of-words, it still only combines form and context in a shallow fashion.",
"REPLACE .",
"Before computing the context embedding, we replace the uncontextualized embedding of the [MASK] token with the word's surface-form embedding: v context ( w,C ) = h i ( e t 1 , ... , e t i 1 , v form ( w,C ) , e t i +1 , ... , e t m ) .",
"Our rationale for this is as follows: During regular BERT pretraining, words chosen for prediction are replaced with [MASK] tokens only 80% of the time and kept unchanged 10% of the time.",
"Thus, standard pretrained BERT should be able to make use of form embeddings presented this way as they provide a strong signal with regards to how the correct embedding of w may look like.",
"ADD .",
"Before computing the context embedding, we prepad the input with the surface-form embedding of w , followed by a colon ( e : ): 3 v context ( w,C ) = h i +2 ( v form ( w,C ) , e : , e t 1 , . . . , e t m ) .",
"The intuition behind this third variant is that lexical definitions and explanations of a word w are occasionally prefixed by w : (e.g., in some online dictionaries).",
"We assume that BERT has seen many definitional sentences of this kind during pretraining and is thus able to leverage surface-form information about w presented this way.",
"computation of the context embedding; thus, we do not require any gating mechanism and directly set v ( w,C ) = A v context ( w,C ) + b .",
"Figure 2 (left) shows how a single context is processed using ADD .",
"To exploit multiple contexts of a word if available, we follow the approach of Schick and Schutze (2019a) and add an AM layer on top of our model; see Figure 2 (right).",
"Given a set of contexts C = { C 1 , . . . , C m } and the corresponding embeddings v ( w,C 1 ) , . . . , v ( w,C m ) , AM applies a self-attention mechanism to all embeddings, allowing the model to distinguish informative from uninformative contexts.",
"The final embedding v ( w, C ) is then a weighted combination of all embeddings: v ( w, C ) = (cid:88) m i =1 i v ( w,C i ) where the self-attention layer determines the weights i subject to (cid:80) mi =1 i = 1 .",
"For further details, see Schick and Schutze (2019a).",
"Like previous work, we use mimicking (Pinter et al., 2017) as a training objective.",
"That is, given a frequent word w with known embedding e w and a set of corresponding contexts C , BERTRAM is trained to minimize (cid:107) e w v ( w, C ) (cid:107) 2 .",
"Training BERTRAM end-to-end is costly: the cost of processing a single training instance ( w, C ) with C = { C 1 , . . . , C m } is the same as processing an entire batch of m examples in standard BERT.",
"Therefore, we resort to the following three-stage training process: 1. We train only the context part, minimizing (cid:107) e w A ( (cid:80) mi =1 i v context ( w,C i ) ) + b (cid:107) 2 where i is the weight assigned to each context C i through the AM layer.",
"Regardless of the selected BERTRAM variant, the context embedding is always obtained using SHALLOW in this stage.",
"Furthermore, only A , b and all parameters of the AM layer are optimized.",
"2. We train only the form part (i.e., only the n gram embeddings); our loss for a single example ( w, C ) is (cid:107) e w v form ( w, C ) (cid:107) 2 .",
"Training in this stage is completely detached from the underlying BERT model.",
"3. In the third stage, we combine the pretrained form-only and context-only models and train all parameters.",
"The first two stages are only run once and then used for all three BERTRAM variants because context and form are trained in isolation.",
"The third stage must be run for each variant separately.",
"We freeze all of BERT's parameters during training as we somewhat surprisingly found that this slightly improves the model's performance while speeding up training.",
"For ADD , we additionally found it helpful to freeze the form part in the third training stage.",
"Importantly, for the first two stages of our training procedure, we do not have to back-propagate through BERT to obtain all required gradients, drastically increasing the training speed.",
"The ideal dataset for measuring the quality of rare word representations would be one for which the accuracy of a model with no understanding of rare words is 0% whereas the accuracy of a model that perfectly understands rare words is 100%.",
"Unfortunately, existing datasets do not satisfy this desideratum, not least because rare words by their nature occur rarely.",
"This does not mean that rare words are not important: As we shift our focus in NLP from words and sentences as the main unit of processing to larger units like paragraphs and documents, rare words will occur in a high proportion of such larger evaluation units.",
"Rare words are also clearly a hallmark of human language competence, which should be the ultimate goal of NLP.",
"Our work is part of a trend that sees a need for evaluation tasks in NLP that are more ambitious than what we have now.",
"4 To create more challenging datasets, we use rarification , a procedure that automatically transforms existing text classification datasets in such a way that rare words become important.",
"We require a pretrained language model M as a baseline, an arbitrary text classification dataset D containing labeled instances ( x , y ) and a substitution dictionary S , mapping each word w to a set of rare synonyms S ( w ) .",
"Given these ingredients, our procedure consists of three steps:",
"(i) splitting the dataset into a train set and a set of test candidates,",
"(ii) training the baseline model on the train set and",
"(iii) modifying a subset of the test candidates to generate the final test set.",
"Dataset Splitting.",
"We partition D into a training set D train and a set of test candidates , D cand .",
"D cand contains all instances ( x , y ) D such that for at least one word w in x , S ( w ) (cid:54) = subject to the constraint that the training set contains at least one third of the entire data.",
"Baseline Training.",
"We finetune M on D train .",
"Let ( x , y ) D train where x = w 1 , . . . , w n is a sequence of words.",
"We deviate from the finetuning procedure of Devlin et al. (2019) in three respects: We randomly replace 5% of all words in x with a [MASK] token.",
"This allows the model to cope with missing or unknown words, a prerequisite for our final test set generation.",
"As an alternative to overwriting the language model's uncontextualized embeddings for rare words, we also want to allow models to add an alternative representation during test time, in 4 Cf.",
"(Bowman, 2019): If we want to be able to establish fair benchmarks that encourage future progress toward robust, human-like language understanding, we'll need to get better at creating clean, challenging, and realistic test datasets. which case we simply separate both representations by a slash (cf. 5.3).",
"To accustom the language model to this duplication of words, we replace each word w i with w i / w i with a probability of 10%.",
"To make sure that the model does not simply learn to always focus on the first instance during training, we randomly mask each of the two repetitions with probability 25%.",
"We do not finetune the model's embedding layer.",
"We found that this does not hurt performance, an observation in line with recent findings of Lee et al. (2019).",
"Test Set Generation.",
"Let p ( y | x ) be the probability that the finetuned model M assigns to class y given input x , and M ( x ) = arg max y Y p ( y | x ) be the model's prediction for input x where Y denotes the set of all labels.",
"For generating our test set, we only consider candidates that are classified correctly by the baseline model, i.e., candidates ( x , y ) D cand with M ( x ) = y .",
"For each such entry, let x = w 1 , . . . , w n and let x w i = t be the sequence obtained from x by replacing w i with t .",
"We compute w i = arg min w j : S ( w j ) (cid:54) = p ( y | x w j = [MASK] ) , i.e., we select the word w i whose masking pushes the model's prediction the farthest away from the correct label.",
"If removing this word already changes the model's prediction that is, M ( x w i = [MASK] ) (cid:54) = y , we select a random rare synonym w i S ( w i ) and add ( x w i = w i , y ) to the test set.",
"Otherwise, we repeat the above procedure; if the label still has not changed after masking up to 5 words, we discard the candidate.",
"Each instance ( x w i 1 = w i 1 ,...,w i k = w i k , y ) of the resulting test set has the following properties: If each w i j is replaced by [MASK] , the entry is classified incorrectly by M .",
"In other words, understanding the words w i j is necessary for M to determine the correct label.",
"If the model's internal representation of each w i j is sufficiently similar to its representation of w i j , the entry is classified correctly by M .",
"That is, if the model is able to understand the rare words w i j and to identify them as synonyms of w i j , it will predict the correct label.",
"Note that the test set is closely coupled to the baseline model M because we select the words to be replaced based on M 's predictions.",
"Importantly, however, the model is never queried with any rare synonym during test set generation, so its representations of rare words are not taken into account for creating the test set.",
"Thus, while the test set is not suitable for comparing M with an entirely different model M (cid:48) , it allows us to compare various strategies for representing rare words in the embedding space of M .",
"Definitional Nonce (Herbelot and Baroni, 2017) is subject to a similar constraint: it is tied to a specific (uncontextualized) embedding space based on Word2Vec (Mikolov et al., 2013).",
"For our evaluation of BERTRAM , we follow the experimental setup of Schick and Schutze (2020).",
"We experiment with integrating BERTRAM both into BERT base and RoBERTa large (Liu et al., 2019b).",
"Throughout our experiments, when BERTRAM is used to provide input representations for one of the two models, we use the same model as BERTRAM 's underlying language model.",
"Further training speci-fications can be found in Appendix A. While BERT was trained on BookCorpus (Zhu et al., 2015) and a large Wikipedia dump, we follow previous work and train BERTRAM only on the much smaller Westbury Wikipedia Corpus (WWC) (Shaoul and Westbury, 2010); this of course gives BERT a clear advantage over BERTRAM .",
"This advantage is even more pronounced when comparing BERTRAM with RoBERTa, which is trained on a corpus that is an order of magnitude larger than the original BERT corpus.",
"We try to at least partially Task Entry MNLI i think i will go finish up my laundry washables .",
"compensate for this as follows: In our downstream task experiments, we gather the set of contexts C for each word from WWC+BookCorpus during inference.",
"5 5.2 WNLaMPro We evaluate BERTRAM on the WNLaMPro dataset (Schick and Schutze, 2020).",
"This dataset consists of cloze-style phrases like A lingonberry is a . and the task is to correctly fill the slot ( ) with one of several acceptable target words (e.g., fruit, bush or berry), which requires understanding of the meaning of the phrase's keyword (lingonberry in the example).",
"As the goal of this dataset is to probe a language model's ability to understand rare words without any task-specific finetuning, Schick and Schutze (2020) do not provide a training set.",
"The dataset is partitioned into three subsets based on the keyword's frequency in WWC: RARE (oc-curring fewer than 10 times) MEDIUM (occurring between 10 and 100 times), and FREQUENT (all remaining words).",
"For our evaluation, we compare the performance of a standalone BERT (or RoBERTa) model with one that uses BERTRAM as shown in Figure 1 (bot-tom).",
"As our focus is to improve representations for rare words, we evaluate our model only on WNLaMPro RARE and MEDIUM .",
"Table 1 gives results; our measure is mean reciprocal rank (MRR).",
"We see that supplementing BERT with any of the proposed methods results in noticeable improvements for the RARE subset, with ADD clearly outperforming SHALLOW and REPLACE .",
"Moreover, ADD performs surprisingly well for more frequent words, improving the score for WNLaMProMEDIUM by 5 We recreate BookCorpus with the script at github.",
"com/soskek/bookcorpus .",
"We refer to the joined corpus of WWC and BookCorpus as WWC+BookCorpus.",
"58% compared to BERT base and 37% compared to Attentive Mimicking.",
"This makes sense considering that the key enhancement of BERTRAM over AM lies in improving context representations and interconnection of form and context; the more contexts are given, the more this comes into play.",
"Noticeably, despite being both based on and integrated into a BERT base model, our architecture even outperforms BERT large by a large margin.",
"While RoBERTa performs much better than BERT on WNLaMPro, BERTRAM still significantly improves results for both rare and medium frequency words.",
"As it performs best for both the RARE and MEDIUM subset, we always use the ADD configuration of BERTRAM in the following experiments.",
"To measure the effect of adding BERTRAM to pretrained deep language model on downstream tasks, we rarify (cf. 4) the following three datasets:",
"MNLI (Williams et al., 2018), a natural language inference dataset where given two sentences a and b , the task is to decide whether a entails b , a and b contradict each other or neither; AG's News (Zhang et al., 2015), a news classification dataset with four different categories ( world , sports , business and science/tech ); DBPedia (Lehmann et al., 2015), an ontology dataset with 14 classes (e.g., company , artist ) that have to be identified from text snippets.",
"For all three datasets, we create rarified instances both using BERT base and RoBERTa large as a baseline model and build the substitution dictionary S using the synonym relation of WordNet (Miller, 1995) and the pattern library (Smedt and Daele-mans, 2012) to make sure that all synonyms have consistent parts of speech.",
"Furthermore, we only consider synonyms for each word's most frequent sense; this filters out much noise and improves the quality of the created sentences.",
"In addition to WordNet, we use the misspelling dataset of Piktus et al. (2019).",
"To prevent misspellings from dominating the resulting datasets, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence.",
"Motivated by the results on WNLaMProMEDIUM , we consider every word that occurs less than 100 times in WWC+BookCorpus as being rare.",
"Example entries from the rarified datasets obtained using BERT base as a baseline model can be seen in Table 2. The average number of words replaced with synonyms or misspellings is 1 .",
"38 , 1 .",
"82 and 2 .",
"34 for MNLI, AG's News and DBPedia, respectively.",
"Our default way of injecting BERTRAM embeddings into the baseline model is to replace the sequence of uncontextualized subword token embeddings for a given rare word with its BERTRAM based embedding (Figure 1, bottom).",
"That is, given a sequence of uncontextualized token embeddings e = e 1 , . . . , e n where e i , . . . , e j with 1 i j n is the sequence of embeddings for a single rare word w with BERTRAM -based embedding v ( w, C ) , we replace e with e (cid:48) = e 1 , . . . , e i 1 , v ( w, C ) , e j +1 , . . . , e n .",
"As an alternative to replacing the original sequence of subword embeddings for a given rare word, we also consider BERTRAM-SLASH , a configuration where the BERTRAM -based embedding is simply added and both representations are separated using a single slash: e SLASH = e 1 , . . . , e j , e / , v ( w, C ) , e j +1 , . . . , e n .",
"The intuition behind this variant is that in BERT's pretraining corpus, a slash is often used to separate two variants of the same word (e.g., useable / us-able) or two closely related concepts (e.g., com-pany / organization, web-based / cloud) and thus, BERT should be able to understand that both e i , . . . , e j and v ( w, C ) refer to the same entity.",
"We therefore surmise that whenever some information is encoded in one representation but not in the other, giving BERT both representations is helpful.",
"By default, the set of contexts C for each word is obtained by collecting all sentences from WWC+BookCorpus in which it occurs.",
"We also try a variant where we add in-domain contexts by giving BERTRAM access to all texts (but not labels) found in the test set; we refer to this variant as INDOMAIN .",
"6 Our motivation for including this variant is as follows: Moving from the training stage of a model to its production use often causes a slight domain shift.",
"This is turn leads to an increased number of input sentences containing words that did not or only very rarely appear in the training data.",
"However, such input sentences can easily be collected as additional unlabeled examples during production use.",
"While there is no straightforward way to leverage these unlabeled examples with an already finetuned BERT model, BERTRAM can easily make use of them without requiring any labels or any further training: They can simply be included as additional contexts during inference.",
"As this gives BERTRAM a slight advantage, we also report results for all configurations without using indomain data.",
"Importantly, adding indomain data increases the number of contexts for more than 90% of all rare words by at most 3, meaning that they can still be considered rare despite the additional indomain contexts.",
"Table 3 reports, for each task, the accuracy on the entire dataset (All) as well as scores obtained considering only instances where at least one word was replaced by a misspelling (Msp) or a WordNet synonym (WN), respectively.",
"7 Consistent with results 6 For the MNLI dataset, which consists of text pairs ( a, b ) , we treat a and b as separate contexts.",
"7 Note that results for BERT and RoBERTa are only loosely comparable because the datasets generated from both baseline models through rarification are different.",
"on WNLaMPro, combining BERT with BERTRAM consistently outperforms both a standalone BERT model and one combined with various baseline models.",
"Using the SLASH variant brings improvements across all datasets as does adding INDOMAIN contexts (exception: BERT/AG's News).",
"This makes sense considering that for a rare word, every single additional context can be crucial for gaining a deeper understanding.",
"Correspondingly, it is not surprising that the benefit of adding BERTRAM to RoBERTa is less pronounced, because BERTRAM uses only a fraction of the contexts available to RoBERTa during pretraining.",
"Nonetheless, adding BERTRAM significantly improves RoBERTa's accuracy for all three datasets both with and without adding INDOMAIN contexts.",
"To further understand for which words using BERTRAM is helpful, Figure 3 looks at the accuracy of BERT base both with and without BERTRAM as a function of word frequency.",
"That is, we compute the accuracy scores for both models when considering only entries ( x w i 1 = w i 1 ,...,w ik = w ik , y ) where each substituted word w i j occurs less than c max times in WWC+BookCorpus, for different values of c max .",
"As one would expect, c max is positively correlated with the accuracies of both models, showing that the rarer a word is, the harder it is to understand.",
"Interestingly, the gap between standalone BERT and BERT with BERTRAM remains more or less constant regardless of c max .",
"This suggests that using BERTRAM may even be helpful for more frequent words.",
"To investigate this hypothesis, we perform another rarification of MNLI that differs from the [0 , 125) [125 , 250) [250 , 500) [500 , ) 0 2 4 6 8 10 Word counts A cc u r ac y i m p r ov e m e n t BERT+B SL RoBERTa+B SL BERT+B SL +ID RoBERTa+B SL +ID Figure 4: Improvements for BERT (base) and RoBERTa (large) when adding BERTRAM-SLASH (+B SL ) or BERTRAM-SLASH + INDOMAIN (+B SL +ID) on MNLI-1000 previous rarification in two respects.",
"First, we in-crease the threshold for a word to count as rare from 100 to 1000.",
"Second, as this means that we have more WordNet synonyms available, we do not use the misspelling dictionary (Piktus et al., 2019) for substitution.",
"We refer to the resulting datasets for BERT base and RoBERTa large as MNLI-1000 .",
"Figure 4 shows results on MNLI-1000 for various rare word frequency ranges.",
"For each value [ c 0 , c 1 ) on the x -axis, the y -axis shows improvement in accuracy compared to standalone BERT or RoBERTa when only dataset entries are considered for which each rarified word occurs between c 0 (inclusively) and c 1 (exclusively) times in WWC+BooksCorpus.",
"We see that for words with frequency less than 125, the improvement in accuracy remains similar even without using misspellings as another source of substitutions.",
"Interestingly, for every single interval of rare word counts considered, adding BERTRAM-SLASH to BERT considerably improves its accuracy.",
"For RoBERTa, adding BERTRAM brings improvements only for words occurring less than 500 times.",
"While using INDOMAIN data is beneficial for rare words simply because it gives us additional contexts for these words , when considering only words that occur at least 250 times in WWC+BookCorpus, adding INDOMAIN contexts does not help.",
"We have introduced BERTRAM , a novel architecture for inducing high-quality representations for",
"rare words in BERT's and RoBERTa's embedding spaces.",
"This is achieved by employing a powerful pretrained language model and deeply integrating surface-form and context information.",
"By replacing important words with rare synonyms, we created downstream task datasets that are more challenging and support the evaluation of NLP models on the task of understanding rare words, a capability that human speakers have.",
"On all of these datasets, BERTRAM improves over standard BERT and RoBERTa, demonstrating the usefulness of our method.",
"Our analysis showed that BERTRAM is beneficial not only for rare words (our main target in this paper), but also for frequent words.",
"In future work, we want to investigate BERTRAM 's potential bene-fits for such frequent words.",
"Furthermore, it would be interesting to explore more complex ways of incorporating surface-form information e.g., by using a character-level CNN similar to the one of Kim et al. (2016) to balance out the potency of BERTRAM 's form and context parts.",
"This work was funded by the European Research Council (ERC #740516).",
"We would like to thank the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"abstain",
"other",
"other"
] |
[
"Automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data, as annotation requires domain expertise.",
"To address this challenge, we propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences, and demonstrate its usefulness in zero-shot fact checking for biomedical claims.",
"We propose CLAIMGEN-BART, a new supervised method for generating claims supported by the literature, as well as KBIN, a novel method for generating claim negations.",
"Additionally, we adapt an existing unsupervised entity-centric method of claim generation to biomedical claims, which we call CLAIMGEN-ENTITY .",
"Experiments on zero-shot fact checking demonstrate that both CLAIMGEN-ENTITY and CLAIMGEN-BART, coupled with KBIN, achieve up to 90% performance of fully supervised models trained on manually annotated claims and evidence.",
"A rigorous evaluation study demonstrates significant improvement in generated claim and negation quality over existing baselines.",
"1 1 Introduction Scientific documents contain complex assertions about scientific processes, making it difficult to automate important tasks such as claim extraction and scientific fact checking.",
"Additionally, the collection of manually annotated labels to train models on tasks with scientific data is time consuming and expensive due to the need for domain expertise (Collins et al., 2017; Augenstein and Sgaard, 2017; Lehman et al., 2019; Wadden et al., 2020; DeYoung et al., 2021).",
"As such, methods which require less manual annotation are especially useful in this domain.",
"This work addresses this challenge Work completed while an intern at AI2 1 Code and data available at: https://github.com/ allenai/scientific-claim-generation (1) ALS is the most common adult motor neuron disease with an incidence of 2 per 100,000 and prevalence of 5.4 per 100,000 individuals.",
"by exploring how automatic generation of scientific claims can assist with dataset creation and zero-shot fact checking in the biomedical domain.",
"Being able to reduce scientific text to atomic assertions has numerous possible applications, and is known to be helpful for scientific communication and machine processing of scientific concepts (Kuhn et al., 2013).",
"Claim generation can enable zero-shot fact checking, reducing the need for expert-labeled data (Pan et al., 2021), and can be used to expand existing datasets such as Wadden et al. (2020) and Saakyan et al. (2021) without additional manual annotation.",
"In this work we focus on the use of claim generation in scientific fact checking, demonstrating that claim generation enables zero-shot biomedical fact checking.",
"Generating scientific claims involves distilling a complex scientific sentence into one or more valid claims (see examples in Figure 1).",
"As in previous 2448 work, we focus on biomedical claims as biomedical literature has long been a major focus in scientific natural language processing, as well as scientific fact checking (Saakyan et al., 2021; Wadden et al., 2020; Kotonya and Toni, 2020).",
"While in Wadden et al. (2020), claims were rewritten by domain experts from complex citation sentences (citances), we propose methods for automatically generating claims and claim negations from this source.",
"Similar to other generation tasks, evaluating the quality of generated output requires multiple judgements beyond the fluency of the generated text, e.g., whether each claim is faithful to the source sentence, and is understandable on its own (Sai et al., 2020).",
"However, there are also other quality attributes that are important to assess specifically for scientific claims, such as whether each claim is atomic or check-worthy (Wright and Augenstein, 2020).",
"Given this, we propose a set of manual evaluation criteria and annotation guidelines for evaluating claim generation (5.2).",
"Additionally, when generating claims to build datasets for tasks such as fact checking, a major challenge is creating refuted claims as negative training instances.",
"Previous work has proposed automatic ways of generating refutations based on negating existing claims or creating claim variants via entity-replacement (Pan et al., 2021) and text-infilling using a pre-trained masked language model (Saakyan et al., 2021).",
"We improve upon this by introducing Knowledge Base Informed Negations (KBIN), a principled method to generate refutations that performs entity-replacement using the relations and learned embeddings of entities in a domain-specific knowledge base.",
"The first study on scientific claim generation, comparing both unsupervised (CLAIMGENENTITY ) and fully supervised (CLAIMGENBART) generation on biomedical text.",
"KBIN, a novel method for generating refuted scientific claims which produces more convincing negations than previous work.",
"Application of our claim generation methods on zero-shot scientific fact checking resulting in 90% of the performance of a model trained on in-domain manually written claims.",
"Additionally, a rigorous evaluation study showing that CLAIMGEN-BART and KBIN produce significantly higher quality claims and more convincing negations than previous work.",
"Valid Claims In this work, we define a valid claim as one which is fluent, atomic, de-contextualized, and accurately reflects the meaning of the original sentence.",
"Fluency is concerned with a claim being a generally well-formed English sentence, and atomicity with a claim being a ver-ifiable statement expressing a finding about one aspect of a scientific entity or process, which can be verified from a single source (Wadden et al., 2020).",
"De-contextualilzation is concerned with a sentence being interpretable on its own, requiring none of the original surrounding text to resolve aspects of the sentence such as pronouns, abbreviations, etc., and can be handled by either directly de-contextualizing a sentence (Choi et al., 2021) or by ensuring that all of the context sentences are available to a model (Wadden et al., 2021).",
"Check-worthy claims in the wild may not be fluent, atomic, or de-contextualized, however it is useful to generate such claims as they have been shown to be useful for automated processing of science concepts (Kuhn et al., 2013) and scientific fact checking (Wadden et al., 2020).",
"Scientific Claim Generation At a high level, scientific claim generation is the task of distilling one or more valid claims from one or more sentences concerned with a scientific fact.",
"More specifically, the task is defined as: given a scientific sentence s and optionally additional context sentences X , generate one or more claims c i C which are valid and entailed by s and X .",
"In the context of fact checking, we must generate claims which are either supported or refuted by the literature, as well as those for which not enough information is present to make a veracity judgement, in order that they may be paired with appropriate evidence documents to serve as training data for fact checking systems.",
"As such, we require methods which can take the claims in C which are entailed by the source sentence and generate negations to acquire refuted claims.",
"We experiment with two generation methods designed to produce claims which are supported by the source sentence.",
"The first method is an entity-centric unsupervised method adapted from Pan et al. (2021) which requires no <sentence, claim> pairs (CLAIMGEN-ENTITY ).",
"We also introduce a new 2449 Exergames improve function and reduce the risk of falls.",
"method that uses BART (Lewis et al., 2020) trained on a small set of <sentence, claim> pairs to directly generate claims (CLAIMGEN-BART).",
"For each sample i , we refer to the input source sentence as s i , the context sentences as x ( i ) l X i and the output claims as C i consisting of k claims { c ( i ) 1 . . . c ( i ) k } Following Wadden et al. (2020), we use citation sentences as unlabelled sentences for generation since these provide a natural link to an evidence document.",
"Various components of our modeling pipelines take advantage of models pretrained on datasets for NER, NLI, QA, and fact-checking.",
"We provide an overview of these datasets in A.4.",
"We adapt the entity-centric method presented in Pan et al. (2021) as an unsupervised claim generation approach.",
"This method has been tested on general domain fact checking, but has not been used for science claim generation and zero-shot scientific fact checking.",
"In particular, we re-implement the base method used for generating supported claims and adapt it to the biomedical domain, substituting in a domain specific model for named-entity recognition.",
"The method consists of the following steps for a given sample i : 1. Run named entity recognition (NER) on the input text to obtain a set of named entities E i .",
"2. For each named entity e ( i ) j , generate a question q ( i ) j about that entity which can be answered from s i .",
"3. From q ( i ) j , generate the declarative form of the question to obtain claim c ( i ) j .",
"pipeline for scientific NLP.",
"The NER model is trained on the MedMentions dataset (Mohan and Li, 2019), which consists of 4,392 PubMed abstracts exhaustively annotated for mentions of UMLS entities (Bodenreider, 2004).",
"Question Generation For question generation, we use BART trained on questions from SQuAD (Rajpurkar et al., 2016).",
"As input for training, we encode a concatenation of the context and answer text from a given SQuAD question, and train the model to decode the question.",
"During inference, we concatenate the source sentence s i and an entity e ( i ) j and sample a question q ( i ) j for this pair using beam search.",
"Question to Claim Finally, as in Pan et al. (2021), we use a second BART model to generate declarative claims from questions.",
"We train the model on the QA2D dataset (Demszky et al., 2018), which contains declarative full sentences paired with questions and their answer from SQuAD.",
"The model is trained by encoding a concatenation of the question and answer, and decoding the full declarative sentence.",
"At inference time, we concatenate and encode q ( i ) j and e ( i ) j , and use beam search at the decoder to generate a claim c ( i ) j .",
"We introduce a fully-supervised model for claim generation based on BART trained on <citance, claim> pairs.",
"For this, we use the manual citance re-writes released by the SciFact authors, 3 which consist of citances from scientific papers rewritten as one or more atomic claims which are directly entailed by the citance.",
"3 https://github.com/allenai/scifact/blob/master/doc/claims-with-citances.md 2450 Algorithm 1 KBIN algorithm 1: function GETNEGATION ( c, KB , V, N ) 2: E NER ( c ) 3: C [] 4: for e j in E do 5: u j LINK ( e j ) 6: R KB.siblings ( u j ) 7: filter ( R, KB.type ( u j )) 8: dist cosdist ( V [ u j ] , V [ R ]) 9: for r in argsort ( dist )[: N ] do 10: A KB.aliases ( R [ r ]) 11: T replace ( c, e j , a ) for a in A 12: C",
"For training, we encode the citance, as well as the sentences immediately before and after the citance (the context), and train the decoder to generate claims directly.",
"We choose to encode the context as well to help de-contextualize generated claims.",
"We concatenate the citance and context using a double pipe (i.e. X i || s i ), and train the encoder to generate one claim at a time.",
"We use topk sampling to generate multiple claims, with k set to the number of noun chunks in the original source citance.",
"4 4 Knowledge Base Informed Negations CLAIMGEN-ENTITY and CLAIMGEN-BART only produce claims which are entailed by the source sentence.",
"Additionally, we are interested in producing claim variants which are directly refuted by the original sentence, as these negations are needed when building fact checking datasets and for training fact checking models.",
"Work in Wadden et al. (2020) created these negations manually, and some work has begun to explore automatically generating these negations for scientific claims (Saakyan et al., 2021).",
"To this end, we leverage the availability of large curated biomedical knowledge bases to develop a principled approach to claim variant generation.",
"In particular, we use the UMLS metathesaurus (Bodenreider, 2004), which unifies hundreds of different ontologies in biomedicine, as a source of term replacements for negations.",
"We provide an overview of the KBIN algorithm 4 We use scispaCy to identify noun chunks in Algorithm 1 and Figure",
"2. KBIN works by first performing NER on an input claim c , obtaining entities { e 1 , . . . , e n } E .",
"For each entity e j in E , we link the entity to its unique concept u j in UMLS using the scispaCy entity linker.",
"If the entity is linked, we select all concepts which are siblings to u j in the concept hierarchy, and which have the same semantic type (e.g. Clinical Drug).",
"We rank all selected concepts by their cosine distance to the entity concept using pre-trained UMLS concept vectors, retaining the top 20 closest concepts.",
"For this, we use cui2vec (Beam et al., 2020), which contains pre-trained concept vectors for 108,477 concepts from UMLS trained on medical documents from diverse sources.",
"For each of the related concepts, we generate candidate claim variants by replacing the entity text in the original claim with the canonical name and aliases of the related concept from UMLS.",
"We rank all replacement sentences by their perplexity using a pre-trained GPT-2 model (Radford et al., 2019), keeping the sentence with least perplexity for each replacement.",
"Finally, from among these most fluent sentences, we select the replacement which maximizes the NLI prediction of contradiction with the original claim.",
"For this, we use a RoBERTa model (Liu et al., 2019) pre-trained on MNLI (Williams et al., 2018).",
"RQ1 Do automatically generated claims enable zero-shot scientific fact checking?",
"RQ2 What is the percentage of high-quality claims generated using our methods?",
"RQ3 How does KBIN compare with previous work for claim negation in terms of generating contradictions?",
"For RQ1 , we use CLAIMGEN-ENTITY and CLAIMGEN-BART generated claims to train a fact checking model, evaluating on the SciFact dataset (Wadden et al., 2020) and comparing to relevant baselines.",
"To answer RQ2 and RQ3 , we design annotation criteria and perform manual evaluations with a group of expert annotators (details in 5.2).",
"SciFact Task The SciFact fact verification task consists of: given a claim c and a corpus of scientific abstracts D , retrieve evidence abstracts from",
"D , predict if the claim is supported or refuted by those documents or if there is not enough information (NEI) to make a prediction, and optionally determine what the rationale sentences are that explain the prediction.",
"Here we focus on the oracle abstract setting of the task, in which gold abstracts are provided to the model and there is no retrieval component.",
"This setup exists in the scientific fact checking literature (Saakyan et al., 2021), and allows us to focus on one component of the fact checking pipeline for evaluating the impacts of claim generation.",
"Creating Training Data for the Zero-shot Setting We require a set of claim-abstract pairs for training where the abstract either supports, refutes, or does not provide evidence for the given claim.",
"We exploit citation relationships to generate claims paired with potential evidence, using citances from the CiteWorth dataset (Wright and Augenstein, 2021) as source citances for generation.",
"Supports claims are produced by directly pairing a generated claim with the abstracts of documents cited by the source citance.",
"For refutes claims, we negate a generated claim using KBIN and pair it with the same abstract.",
"For claims labelled NEI , we pair the generated claim or negated claim with the abstract of the source document of the citance; the source document is related to the claim but presumably does not directly support or refute the claim given the need for a citation.",
"Experimental Setup In our experimental setup, we use LongChecker (Wadden et al., 2021), a Long-former (Beltagy et al., 2020) model adapted for scientific fact checking.",
"The model forms its input by concatenating a claim with its evidence abstract, inserting separator tokens between sentences, and uses a classification head to predict the veracity label from the representation of the [CLS] token.",
"We explore several different setups for our training data.",
"As a baseline, we experiment with pretraining only on FEVER claims (Thorne et al., 2018), which are general domain fact checking data based on Wikipedia.",
"We also include an experiment where we manually tune a threshold for the prediction of NEI on the SciFact training data, as we saw that the model tends to overpredict this label without any fine-tuning on in-domain data.",
"We also provide an upper bound on performance by fine-tuning on the in-domain train split of SciFact.",
"Finally, we experiment with both Method P R F1 FEVER only 86 .",
"CLAIMGEN-ENTITY and CLAIMGEN-BART as sources of training data generated from CiteWorth citances, pairing both with KBIN for negations.",
"We note that though CLAIMGEN-BART requires manually re-written claims as training data for generating supports claims, it does not use any claims paired with evidence manually labelled for veracity, thus making it zero-shot for the SciFact fact-checking task.",
"In all cases we test on the SciFact dev split.",
"Hyperparameter information, including number of training instances, is given in A.3, and code and data will be released upon paper acceptance.",
"In all cases, results are reported as macro-F1.",
"Results Our results on SciFact are given in Table",
"1. With an upper bound of 77.70 F1, we see that a model fine-tuned on automatically generated claims is able to achieve within 90% of the performance of a model trained on in-domain manually written claims.",
"This is also invariant to the method used to generate claims, as both CLAIMGENENTITY and CLAIMGEN-BART produce similar results.",
"Additionally, both methods provide significant gains over pre-training on FEVER only, especially when no threshold on NEI claims is used but also when re-calibrating the model to predict NEI less often.",
"Next, we explore if there are differences between our methods in terms of claim quality and the percentage of valid claims.",
"For this, we ask three expert annotators to manually assess generated claims along a number of quality criteria.",
"One annotator has undergraduate training in the life sciences and graduate training in computer science; the other two annotators have undergraduate training in the life sciences and materials science respectively.",
"We define a set of criteria for evaluation, given in Table",
"2. These criteria are inspired by the AIDA (Atomic, Independent, Declarative, and Absolute) 2452 Metric Labels Fluency 3 The claim contains no grammatical errors and its meaning can be understood 2 The claim contains some grammatical errors but is still understandable 1The claim contains many grammatical errors and cannot be understood De-Contextualized 1 The claim is interpretable on its own and requires no context; the addition of the original context does not alter the meaning of the claim 0 The claim cannot be interpreted in a meaningful way without the original context Atomicity 1 The claim is about a single entity/process (atomic) 0 The claim is non-atomic and can be broken down into multiple claims Faithfulness 5 The claim is correct and fully supported and complete with respect to the original sentence and context 4 The claim is correct with respect to the original sentence and context but leaves out information from the original sentence and context 3 The claim is related to the original sentence and does not contain incorrect information but is not explicitly stated in the original sentence 2 The claim contains explicitly incorrect information relative to the original sentence and context 1 The claim has nothing to do with the original sentence Table 2: Claim quality evaluation metrics and their possible values framework for scientific claims introduced in Kuhn et al. (2013).",
"They are also based on similar human evaluation criteria used to assess generation quality for related tasks (Sai et al., 2020).",
"We develop an initial set of guidelines for the annotators and conduct two rounds of pilot annotations to improve instructions and increase agreement.",
"For the final evaluation, we generate claims on a set of 100 citances sampled from the CiteWorth dataset (Wright and Augenstein, 2021), which contains citations in context for over 1M citances spanning 10 domains.",
"We limit the citances to those from papers in biology and medicine to match the domain of SciFact.",
"Annotator agreement is measured as Krippendorff's (Krippendorff, 2011) on 236 claims for each category except fluency, where we measure the percentage of claims where all annotators agree.",
"5 The annotators then assess 1,049 total claims (including the 236 shared claims).",
"Each annotator rates all criteria for an individual claim, starting with fluency, then de-contextualized, then atomicity, then faithfulness.",
"We are mainly interested in claim quality and yield, so annotators only annotate de-contextualized if the claim is legible (fluency > 1), and only annotate atomicity and faithfulness if the claim is also de-contextualized (so one is able to discern meaning from the claim).",
"This results in the following rules for acceptable 5 Fluency agreement is measured in terms of agreement percentage as most ratings are the same (3), thus any disagreements have an oversized influence on .",
"claims based on the definitions for the labels in each category: Fluency > 1 AND De-Contextualized = 1 AND Atomicity = 1 AND Faithfulness",
"> 3. An acceptable claim is thus legible, meaningful, represents a single aspect of a scientific entity or process, and accurately reflects the information presented in the original citance.",
"The results of claim quality annotation are given in Table",
"3. Note that these are on claims generated by CLAIMGEN-ENTITY and CLAIMGEN-BART (see examples in Table 4), and thus are only supports claims.",
"We first note that inter-annotator agreement is very high for fluency and moderate across all other criteria.",
"Generated claims are quite fluent across methods, with a small minority of instances being illegible.",
"Unsurprisingly, CLAIMGEN-BART improves over CLAIMGENENTITY across all categories except for atomicity.",
"This intuitively makes sense as CLAIMGENENTITY directly produces claims which are about a single entity.",
"CLAIMGEN-ENTITY yields a higher number of claims per citance as it generates one claim for every entity in the sentence, but the precision of acceptable claims is much lower than that of CLAIMGEN-BART.",
"Thus, there is a tradeoff between the two methods between the number of claims generated and their acceptability.",
"While higher yield could lead to higher coverage of claims in the original text, this study is left to future work.",
"Method Fluency De-Con.",
"(%) Atomic (%) Faithfulness # Gen # Accept P CLAIMGEN-ENTITY 2 .",
"51 55 .",
"63 85 .",
"28 3 .",
"54 893 111 12 .",
"43 CLAIMGEN-BART 2 .",
"74 84 .",
"35 80 .",
"65 4 .",
"15 156 69 44 .",
"23 (236 claims) 82.74 64.53 58.71 53.01 -Table 3: Average annotation score, agreement, and claim yield for each category.",
"SciFact.",
"We generate claims for each source citance s i in the SciFact dev split, and calculate the ROUGE score (Lin, 2004) between each generated claim c ( i ) j and each manually written claim d ( i ) k .",
"From this, we take an average of the max ROUGE score for each generated claim.",
"Formally, given | C | claims we calculate: score = 1 | C | (cid:88) i (cid:88) j max k ROUGE ( c ( i ) j , d ( i ) k ) Our evaluation results are given in Table 5.",
"Both methods produce claims which have high overlap with the reference claims, though claims generated directly using BART are significantly closer to the reference claims than those generated using CLAIMGEN-ENTITY .",
"Finally, we note the these scores are in the range of state-of-the-art models used for paraphrase generation, establishing a solid baseline for this task (Zhou and Bhat, 2021).",
"Finally, we perform a manual evaluation to compare KBIN against other methods of negation generation.",
"Annotators evaluate negations based on Fluency and Entailment.",
"We adopt the definitions used to annotate the SNLI corpus (Bowman et al., 2015), in which the annotator is given an original claim (premise) and a generated negation (hypoth-esis) and asked to select from among the following options, including a SKIP option for Fluency: 3 The hypothesis is DEFINITELY FALSE given the premise 2 The hypothesis MIGHT BE TRUE given the premise 1 The hypothesis is DEFINITELY TRUE given the premise SKIP The hypothesis contains a lot of grammatical errors and cannot be understood We compare KBIN to two baselines.",
"a random entity of the same type, similar to the method in Pan et al. (2021).",
"The second is the proposed negation generation method in Saakyan et al. (2021).",
"The method is based on extracting keywords using YAKE (Campos et al., 2020) (an unsupervised method based on statistical text features), replacing those keywords using text infilling with a pre-trained language model, and selecting the replacement with the highest contradiction score using a model pre-trained for NLI.",
"We generate negations for 100 claims using all three methods.",
"For annotation, generated negations from all three methods are aggregated and the order of negation method randomized for each of the 100 claims.",
"Example negations generated by all three methods are given in Table 6 and annotation results for fluency and entailment are given in Table 7.",
"First, KBIN produces more fluent claims than both baselines.",
"Additionally, KBIN produces more convincing negations on average than both baselines.",
"We observe that the most common operation performed by all three methods is to replace a noun phrase.",
"KBIN has the benefit of being able to replace many entity types corresponding to concepts found in UMLS, which also include verb phrases that encode relations.",
"Finally, KBIN improves over the baseline from Saakyan et al. (2021) by producing fewer claims which are directly entailed by the source claim, i.e., that maintain the original meaning and do not negate the original claim.",
"To give further insight into the quality of claims generated using our methods, we perform an experiment where we train and test models for scientific fact checking using claims only.",
"This claim-only experiment helps us assess whether the negation process introduces data artifacts that can be leveraged by the model to predict veracity.",
"We present results from training on claims generated using CLAIMGEN-BART and KBIN, compared against training on the original SciFact training data (which has manually written negations), along with random and majority baselines, in Figure",
"3. We observe that there are likely some dataset artifacts in the original SciFact claims that lead to model performance well above the majority and random baselines.",
"6 This phenomenon has been 6 It is difficult to fully separate the contributions of data artifacts and model performance in this setting, i.e., there is no situation which guarantees *no* undesirable data artifacts.",
"Performance ought to be better than a random baseline in this theoretical setting, due to the pretrained language model likely having had some exposure to the content of the claims during pretraining.",
"observed in general domain natural language inference datasets as well (Poliak et al., 2018).",
"Training on claims generated using our methods results in performance that is much more proximal to random performance on the SciFact dev set, indicating that the label-associated bias in the original training data is not present and a possible domain shift between the original SciFact claims and our generated claims.",
"This can further explain some of the performance gap we observe between zero-shot fact-checking and the upper bound of training on manually labeled training data (Table 1).",
"Scientific Fact Checking Our work follows a line of recent literature on scientific fact checking (Wadden et al., 2020).",
"The goal of this task is to determine the veracity of claims related to scientific topics by retrieving appropriate documents from scientific literature, finding evidentiary sentences from those documents, and determining whether claims are supported, refuted, or there is not enough evidence to make a judgement.",
"The task closely resembles the task of general domain fact-checking (Thorne et al., 2018; Augenstein et al., 2019).",
"Well-performing systems on this task use large language models to perform neural document retrieval (Pradeep et al., 2020) or multi-task learning of rationale prediction and stance prediction (Li et al., 2021; Wadden et al., 2021).",
"Recent work on general domain fact checking has also introduced methods for adversarial generation of claims which are particularly difficult to fact-check (Thorne et al., 2019; Atanasova et al., 2020), and for performing the task without any labeled data (Pan et al., 2021).",
"Our proposed methods extend zero-shot fact checking to the scientific domain, demonstrating that one can achieve 90% of the inference performance of state-of-the-art systems without domain-specific labeled data.",
"Generating Training Data Our work is also related to methods for the automatic generation of training data.",
"Generation of synthetic data has been used for multiple tasks, for example question answering (Duan et al., 2017; Riabi et al., 2021), knowledge-base completion (Safavi et al., 2021), and fact-checking (Pan et al., 2021).",
"Most similar to our setting, the COVID-Fact dataset (Saakyan et al., 2021) contains claims related to COVID-19 crawled from Reddit, and is constructed semiautomatically.",
"Claims which are supported by evidence are extracted from Reddit and verified by human annotators, while negations of these claims are generated automatically via masked language model infilling.",
"KBIN improves upon the negation method proposed in this work by leveraging in-domain structured knowledge via UMLS.",
"In this work, we propose the task of scientific claim generation, presenting CLAIMGEN-BART, CLAIMGEN-ENTITY , and KBIN to perform the task.",
"We demonstrate that generated claims can be used to train a model for zero-shot scientific fact checking and obtain within 90% of the performance of a model trained on human-written claims.",
"Through a rigorous user study we demonstrate that CLAIMGEN-BART produces higher quality claims than CLAIMGEN-ENTITY , and that KBIN produces more fluent and more convincing negations than previous work.",
"Work remains to improve claim generation quality and assess the impacts of generated claims in other domains of science, as well as how generated claims can be used in the evidence retrieval component of fact checking systems.",
"We hope that our methods will be used to facilitate future work by enabling faster creation of training datasets and improving the performance of models on the timely and important task of scientific fact checking.",
"This project is supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Skodowska-2456",
"Curie grant agreement No 801199, and by the United States National Science Foundation Grant OIA-2033558.",
"We thank Doug Downey, Hannaneh Hajishirzi, the reviewers, and members of the Semantic Scholar research team for their valuable feedback.",
"Automated scientific fact checking has great potential value to the scientific community, as well as for addressing phenomenon such as the propagation of scientific misinformation.",
"Our aim in releasing models for scientific claim generation is to improve the generalizability of science fact checking systems in domains with less training resources.",
"When training our fact checking models with generated or synthetic data, there are questions regarding the veracity of the generated data and whether a model trained on inferred labels could produce trustworthy judgments.",
"We hope that by introducing this task and models, we will enable the community to study such questions, while contributing to data curation in a domain in which such curation would normally require significant manual efforts and cost."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"method",
"other",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Fine-tuning the entire set of parameters of a large pretrained model has become the mainstream approach for transfer learning.",
"To increase its efficiency and prevent catastrophic forgetting and interference, techniques like adapters and sparse fine-tuning have been developed.",
"Adapters are modular , as they can be combined to adapt a model towards different facets of knowledge (e.g., dedicated language and/or task adapters).",
"Sparse fine-tuning is expressive , as it controls the behavior of all model components.",
"In this work, we introduce a new fine-tuning method with both these desirable properties.",
"In particular, we learn sparse, real-valued masks based on a simple variant of the Lottery Ticket Hypothesis.",
"Task-specific masks are obtained from annotated data in a source language, and language-specific masks from masked language modeling in a target language.",
"Both these masks can then be composed with the pretrained model.",
"Unlike adapter-based fine-tuning, this method neither increases the number of parameters at inference time nor alters the original model architecture.",
"Most importantly, it outperforms adapters in zero-shot cross-lingual transfer by a large margin in a series of multilingual benchmarks, including Universal Dependencies, MasakhaNER, and AmericasNLI.",
"Based on an in-depth analysis, we additionally find that sparsity is crucial to prevent both 1) interference between the fine-tunings to be composed and 2) overfitting.",
"We release the code and models at https://github.com/ cambridgeltl/composable-sft .",
"Fine-tuning of pretrained models (Howard and Ruder, 2018; Devlin et al., 2019, inter alia ) is arguably the dominant paradigm in NLP at present.",
"Originally, fine-tuning involved supervised learning of all the parameters of a model pretrained on unlabeled texts.",
"However, given the size of Transformer-based architectures, this approach is often timeand resourceinefficient, and may result in catastrophic forgetting and interference (Wang et al., 2020) during multiple adaptations.",
"To overcome these limitations, two main alternatives have emerged: 1) through adapters , new parameters can be added to a pretrained model in the form of extra intermediate layers (Rebuffi et al., 2017; Houlsby et al., 2019) and fine-tuned while keeping all the pretrained parameters fixed; 2) sparse fine-tuning (SFT) of a small subset of pretrained model parameters (Guo et al., 2021; Zaken et al., 2021; Xu et al., 2021b, inter alia ).",
"Adapters have proven especially useful in multilingual NLP (Bapna and Firat, 2019; stn et al., 2020; Pfeiffer et al., 2020b; Vidoni et al., 2020; Pfeiffer et al., 2021b; Ansell et al., 2021) because they exhibit a surprising degree of modularity .",
"This ability to disentangle and recombine orthogonal facets of knowledge in original ways (Ponti et al., 2021; Ponti, 2021) allows for separately learning a task adapter from labeled data in a source language and dedicated language adapters from unlabeled data in the source language and target languages.",
"By stacking these components, it is possible to perform zero-shot cross-lingual transfer.",
"Compared to sequentially fine-tuning the full model on both the task and target language, this yields superior performance and efficiency (Pfeiffer et al., 2020b).",
"Notably, achieving coverage over NT tasks in NL target languages with the sequential approach requires NTNL models to be trained, whereas the modularity of adapters reduces this to NT + NL .",
"Meanwhile, the advantage of SFTs over adapters is their expressivity : rather than a non-linear transformation of the output of Transformer layers (e.g., using a shallow MLP as with adapters), they can operate directly on a pretrained model's embedding and attention layers.",
"It therefore seems natural to search for a parameter-efficient fine-tuning method that is both modular and expressive.",
"To this end, we propose Lottery Ticket Sparse Fine-Tuning (LT-SFT), a simple and general-purpose adaptation technique inspired by the Lottery Ticket Hypothesis (LTH; Frankle and Carbin, 2019; Malach et al., 2020), which was originally conceived for pruning large neural networks.",
"In particular, after fine-tuning a pretrained model for a specific task or language, we select the subset of parameters that change the most.",
"Then, we rewind the model to its pretrained initialization (without setting any value to zero, contrary to the original LTH algorithm).",
"By re-tuning again only the selected subset of parameters, we obtain a sparse fine-tuning in the form of a vector of differences with respect to the pretrained model.",
"Multiple SFTs can be composed by simply summing them with the pretrained model.",
"We provide a graphical representation of our method in Figure 1.",
"We benchmark LT-SFT on a series of multilingual datasets, including Universal Dependencies (Zeman et al., 2020) for part-of-speech tagging and dependency parsing, MasakhaNER (Adelani et al., 2021) for named entity recognition, and AmericasNLI (Ebrahimi et al., 2021) for natural language inference.",
"We evaluate it in a zero-shot cross-lingual transfer setting on 35 typologically and geographically diverse languages that include both languages seen and unseen during masked language modeling of the pretrained model.",
"The results in all transfer tasks indicate that LT-SFT consistently achieves substantial gains over the current state-of-the-art adapter-based method for cross-lingual transfer, MAD-X (Pfeiffer et al., 2020b).",
"In addition to its superior performance, modularity, and expressivity, LT-SFT offers a series of additional advantages over adapters: 1) the number of parameters remains constant, which prevents the decrease in inference speed observed when adapter layers are added; 2) the neural architecture remains identical to the pretrained model, which makes code development model-independent rather than requiring special modifications for each possible architecture (Pfeiffer et al., 2020a).",
"Finally, 3) we empirically demonstrate that the peak in performance for LT-SFT is consistently found with the same percentage of tunable parameters, whereas the best reduction factor for MAD-X is task-dependent.",
"This makes our method more robust to the choice of hyper-parameters.",
"In addition, we find that a high level of sparsity in language and task fine-tunings is beneficial to performance, as this makes overlaps less likely and poses a lower risk of creating interference between the knowledge they contain.",
"Moreover, it makes fine-tunings less prone to overfitting due to their constrained capacity.",
"Thus, sparsity is a fundamental ingredient for achieving modularity and composability.",
"These properties in turn allow for systematic generalization to new combinations of tasks and languages in a zero-shot fashion.",
"To establish a broader context for our research, we first provide a succinct overview of current methods for efficient fine-tuning, such as adapters and SFT.",
"We then recapitulate the Lottery Ticket Hypothesis, 1779 upon which our newly proposed method is built.",
"Adapters and Composition.",
"An adapter is a component inserted into a Transformer model with the purpose of specializing it for a particular language, task, domain, or modality (Houlsby et al., 2019).",
"Previous work in multilingual NLP has mainly adopted the lightweight yet effective adapter variant of Pfeiffer et al. (2021a).",
"In this setup, only one adapter module, consisting of a successive down-projection and up-projection, is injected per Transformer layer, after the feed-forward sub-layer.",
"The adapter A b at the b -th Transformer layer performs the following operation: A b ( h b , r b ) = U b a ( D b h b ) + r b .",
"h b and r b are the Transformer hidden state and the residual at layer b , respectively.",
"D b R m h and U b R h m are the downand up-projections, respectively ( h being the Transformer's hidden layer size, and m the adapter's dimension), and a ( ) is a non-linear activation function.",
"The residual connection r b is the output of the Transformer's feed-forward layer whereas h b is the output of the subsequent layer normalization.",
"During fine-tuning of a pretrained model with adapters, only the adapter parameters U and D are modified while the pretrained model's parameters are kept fixed.",
"In the MAD-X adapter composition framework for cross-lingual transfer (Pfeiffer et al., 2020b), a language adapter (LA) for a massively multilingual Transformer (MMT) is learned for each source and target language through masked language modeling (MLM), and a task adapter (TA) is learned for each target task, where the LA for the source language is inserted during TA training.",
"At inference time, the task adapter and target language adapter are composed by stacking one on top of the other.",
"This adapter composition approach has been shown to be highly effective for cross-lingual transfer (Pfeiffer et al., 2020b, 2021b; Ansell et al., 2021), especially for low-resource languages and target languages unseen during MMT pretraining.",
"Sparse Fine-Tuning.",
"We call F (cid:48) = F ( ; + ) a sparse fine-tuning (SFT) of a pretrained neural model F ( ; ) if is sparse.",
"We sometimes refer to itself as an SFT, or as the SFT's difference vector .",
"Previously proposed SFT methods include DiffPruning (Guo et al., 2021), BitFit (Zaken et al., 2021) and ChildTuning (Xu et al., 2021b).",
"DiffPruning simulates sparsity of the difference vector during training by applying a continuous relaxation of a binary mask to it.",
"BitFit on the other hand allows non-zero differences only for bias parameters.",
"ChildTuning selects a subset of fine-tunable parameters by using Fisher information to measure the relevance of each parameter to the task.",
"These methods have been shown to be competitive with full fine-tuning on GLUE (Wang et al., 2019), despite the difference vector having fewer than 0.5% non-zero values.",
"Lottery Ticket Hypothesis.",
"(LTH; Frankle and Carbin, 2019; Malach et al., 2020) states that each neural model contains a sub-network (a winning ticket) that, if trained again in isolation, can match or even exceed the performance of the original model.",
"To achieve this, after a pruning stage where some parameters are zero-masked and frozen according to some criterion (e.g., weight magnitude), the remaining parameters are restored to their original values and then re-tuned.",
"This process of pruning and re-training can be iterated multiple times.",
"The LTH has so far been used mostly for model compression through network pruning; to our knowledge, we are the first to use it for pretrained model adaptation .",
"Multi-Source Task Training.",
"Ansell et al. (2021) showed that training task adapters using data from multiple source languages can result in sizable improvements in downstream zero-shot transfer performance even when the total number of training examples is held constant.",
"In their training setup, each batch consisted of examples from a single, randomly selected source language, the language adapter for which is activated for the duration of the training step.",
"Training.",
"In this work, we propose Lottery Ticket Sparse Fine-Tuning (LT-SFT).",
"Similar to the Lottery Ticket algorithm of Frankle and Carbin (2019), our LT-SFT method consists of two phases: (Phase 1) Pretrained model parameters (0) are fully fine-tuned on the target language or task data D , yielding (1) .",
"Parameters are ranked according to some criterion, in our case greatest absolute difference | (1) i (0) i | , and the top K are selected for tuning in the next phase: a binary mask is set to have 1 in positions corresponding to these parameters, and 0 elsewhere.",
"(Phase 2)",
"After resetting the parameters to their 1780 original values (0) , the model is again fine-tuned, but this time only the K selected parameters are trainable whereas the others are kept frozen.",
"In practice, we implement this by passing the masked gradient (cid:12) L ( F ( ; ) , D ) (where (cid:12) denotes element-wise multiplication and L a loss function) to the optimizer at each step.",
"From the resulting fine-tuned parameters (2) we can obtain the sparse vector of differences = (2) (0) .",
"In addition, we experiment with applying a regularization term which discourages parameters from deviating from their pretrained values (0) .",
"Specifically, we use L1 regularization of the form J ( ) = N (cid:80) i | i (0) i | .",
"Composition.",
"Although we often use the term sparse fine-tuning to refer to the difference vector itself, an SFT is most accurately conceptualized as a functional which takes as its argument a parameterized function and returns a new function, where some sparse difference vector has been added to the original parameter vector.",
"Suppose we have a language SFTSL and a task SFTST defined by SL ( F ( ; )) = F ( ; + L ) ST ( F ( ; )) = F ( ; + T ) .",
"We adopt a similar cross-lingual transfer setup to MAD-X (Pfeiffer et al., 2020b, see also 2).",
"We start with an MMTF with pretrained parameters learned through masked language modeling on many languages, such as mBERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020).",
"For each language of interest l , we learn a language SFT ( l ) L through LT-SFT (also with an MLM objective) on text from language l .",
"For each task of interest t , we learn a task SFT ( t ) T through LT-SFT on annotated data from some source language s .",
"When learning the task SFT, we first adapt to the source language by applying the language SFT for s .",
"1 The language SFT is removed again after training.",
"That is, we perform 1 Adapting to the source language yields substantial improvements in cross-lingual transfer performance with both MAD-X and LT-SFT, with gains of 2-3 points in our preliminary experiments.",
"Paradoxically, our results (see Table 7) and results from previous work (Pfeiffer et al., 2020b; Ansell et al., 2021) suggest that adapting to high-resource target languages at inference time does not give similarly large benefits.",
"We think this phenomenon warrants further investigation.",
"LT-SFT on F ( ; + ( s ) L ) to obtain fine-tuned parameter vector (cid:48) .",
"We then calculate ( t ) T = (cid:48) ( + ( s ) L ) .",
"Note that during task training, we also learn a classifier head, which is fully fine-tuned during both phases of LT-SFT adaptation, with the same random initialization applied at the beginning of each phase.",
"We perform zero-shot adaptation of F to target language l for task t by composing language and task SFTs to obtain F t,l = F ( ; + ( t ) T + ( l ) L ) .",
"On top of this, we stack the classifier head learned for t .",
"For a formal algorithm of LT-SFT and the transfer procedure, we refer to Appendix A. 4 Experimental Setup To evaluate our new method extensively, we benchmark its zero-shot cross-lingual performance on four distinct tasks: part-of-speech tagging (POS), dependency parsing (DP), named entity recognition (NER), and natural language inference (NLI).",
"Table 1 summarizes our experimental setup, including the datasets and languages considered in our experiments.",
"We put emphasis on low-resource languages and languages unseen during MMT pretraining, although we also evaluate on a few high-resource languages.",
"In total, we cover a set of 35 typologically and geographically diverse languages, which makes them representative of cross-lingual variation (Ponti et al., 2019, 2020).",
"The main baseline is MAD-X, the state-of-the-art adapter-based framework for cross-lingual transfer (Pfeiffer et al., 2020b).",
"We use the MAD-X 2.0 variant, where the last adapter layers are dropped.",
"Pfeiffer et al. (2021b) found that this improved performance, which we could confirm in our preliminary experiments.",
"Since adapters with the configuration used by Pfeiffer et al. (2020b) are unavailable for many languages in our evaluation, we train our own for all languages.",
"In Appendix D we also provide an evaluation with comparable language adapters from AdapterHub (Pfeiffer et al., 2020a) where available.",
"We also perform experiments with BITFIT (Za-ken et al., 2021) to establish a baseline for an existing SFT technique.",
"In addition to the main LT-SFT model variant, on POS and DP we test a RANDSFT variant as an ablation, where the K parameters to be fine-tuned are selected at random rather than based on an informed criterion.",
"MLM Training Data.",
"For all languages in our POS and DP evaluation, we perform MLM language SFT/adapter training on Wikipedia corpora.",
"We also use Wikipedia for all languages in our NER evaluation if available.",
"Where this is not the case, we use the Luo News Dataset (Adelani et al., 2021) for Luo and the JW300 corpus (Agic and Vulic, 2019) for Nigerian Pidgin.",
"The main corpora for the languages in our NLI evaluation are those used by the dataset creators to train their baseline models (Ebrahimi et al., 2021); however, since the sizes of these corpora are restricted due to containing only parallel data, we augment them with data from Wikipedia and the corpora of indigenous Peruvian languages of Bustamante et al. (2020) where available.",
"More details on data sources are provided in Appendix B. Training Setup and Hyper-parameters.",
"For both SFTs and adapters, we train for the lesser of 100 epochs or 100,000 steps of batch size 8 and maximum sequence length 256, subject to an absolute minimum of 30,000 steps since 100 epochs seemed insufficient for some languages with very small corpora.",
"Model checkpoints are evaluated every 1,000 steps (5,000 for high-resource languages) on a held-out set of 5% of the corpus (1% for high-resource languages), and the one with the smallest loss is selected at the end of training.",
"We use the AdamW optimizer (Loshchilov and Hutter, 2019) with an initial learning rate of 5 e -5 which is linearly reduced to 0 over the course of training.",
"Following Pfeiffer et al. (2020b), the reduction factor (i.e., the ratio between model hidden size and adapter size) for the adapter baseline was set to 2 for a total of 7.6M trainable parameters.",
"For comparability, we set the same number of trainable parameters K for our language LT-SFTs.",
"This results in language SFTs with a sparsity of 4.3% for mBERT and 2.8% for XLM-R.",
"Since BITFIT tunes exclusively the bias parameters, its language SFTs have a fixed sparsity of 0.047% for mBERT and 0.030% for XLM-R.",
"Importantly, during language sparse fine-tuning, we decouple the input and output embedding matrices and fix the parameters of the output matrix; otherwise, we find that the vast majority of the K most changed parameters during full fine-tuning belong to the embedding matrix, seemingly due to its proximity to the model output, which damages downstream performance.",
"We also fix the layer normalization parameters; all other parameters are trainable.",
"For language adaptation, we apply L1 regularization as described in 3.1 with = 0 .",
"1 .",
"Note that the specified training regime is applied in the same way during both phases of LT-SFT.",
"For language adapter training in the MAD-X baseline, we use the Pfeiffer configuration (Pfeiffer et al., 2021a) with invertible adapters, special additional sub-components designed for adapting to the vocabulary of the target language, which yields consistent gains.",
"For POS tagging, DP, and NER, 2 we train task SFTs/adapters on the datasets indicated in Table 1 for 10 epochs with batch size 8, except during the first phase of LT-SFT training where we train for only 3 epochs.",
"3 Model checkpoints are evaluated on the validation set every 250 steps, and the best checkpoint is taken at the end of training, with the selection metric being accuracy for POS, labeled attachment score for DP, and F1-score for NER.",
"Similarly to language fine-tuning, we use an initial learning rate of 5 e -5 which is linearly reduced to 0 over the course of training.",
"For POS and NER we use the standard token-level single-layer multi-class model head.",
"For DP, we use the shallow variant (Glava and Vulic, 2021) of the biaffine dependency parser of Dozat and Manning (2017).",
"For NLI, we employ the same fine-tuning hyper-parameters as Ebrahimi et al. (2021): 5 epochs with batch size 32, with checkpoint evaluation on the validation set every 625 steps, and an initial learning rate of 2 e -5.",
"We apply a two-layer multi-class clas-sification head atop the MMT output corresponding to the [CLS] token.",
"We found that the number of trainable parameters during task adaptation (governed by K for SFTs and reduction factor for adapters) has a large effect on performance: we thus experiment with a range of values.",
"Specifically, we test adapter reduction factors of 32, 16, 8, 4, 2, and 1, and equivalent values of K 4 for SFT.",
"To validate that task LT-SFT training, like task adapter training in prior work (Ansell et al., 2021), benefits from the presence of multiple source languages in the training data, and to push the boundaries of zero-shot cross lingual transfer, we perform multi-source training experiments on DP and NLI.",
"2 MasakhaNER and CoNLL 2003 datasets respectively use the DATE and MISC tags which are not used by the other; we replace these with the O tag at both train and test time.",
"3 This is because full fine-tuning is more prone to overfitting than sparse/adapter fine-tuning.",
"Early stopping somewhat addresses overfitting, but it is insufficient in a cross-lingual setting because the target language performance generally starts to deteriorate faster than the source language performance.",
"4 Approximately 442K, 884K, 1.7M, 3.5M, 7.1M, and 14.2M respectively, amounting to sparsity levels of 0.25%, 0.50%, 1.0%, 2.0%, 4.0% and 8.0% for mBERT and 0.16%, 0.32%, 0.63%, 1.3%, 2.6% and 5.1% for XLM-R.",
"We adopt a similar setup to Ansell et al. (2021): we obtain the training set by concatenating the training data for all source languages.",
"We randomly shuffle the training set and train as in the single-source case, except that each batch is composed of examples from a single source language, whose language SFT is applied during the training step.",
"We prioritize maximizing performance rather than providing a fair comparison against the single-source case, so unlike Ansell et al. (2021), we use the entirety of the training sets.",
"In derogation of this principle, we set a maximum of 15K examples per language for DP to better balance our sample.",
"For DP, we train our models on the UD treebanks of 11 diverse high-resource languages.",
"For NLI, we train on MultiNLI (Williams et al., 2018) plus the data for all 14 non-English languages in the XNLI dataset (Conneau et al., 2018).",
"We also evaluate multi-source task SFT training on extractive question answering (QA), as a comparatively generous amount of multilingual data is available for this task.",
"Specifically, we train on English data from SQuAD version 1 (Rajpurkar et al., 2016), all languages from MLQA (Lewis et al., 2020), and those languages from XQuAD (Artetxe et al., 2020) which also appear in MLQA.",
"We evaluate on the languages present in XQuAD but not in MLQA.",
"For QA, we train for 5 epochs with batch size 12 and initial learning rate 3 e -5.",
"Full details of the source languages can be found in Appendix B. We use an equivalent reduction factor of 1 for all tasks, following the strongest setting from our single-source experiments.",
"Except as stated above, the training configuration and hyper-parameters are the same as for single-source training.",
"We report the average test performance of zero-shot cross-lingual transfer for the best reduction factor (or equivalent K ) in Table 2.",
"Some patterns emerge across all four tasks: first, LT-SFT consistently outperforms all the baselines.",
"In particular, it surpasses the state-of-the-art MAD-X across all tasks, with gains of 2.5 accuracy in part-of-speech tagging, 2.5 UAS and 3.7 LAS in dependency parsing, 1.8 F1 score in named entity recognition, and 1.9 accuracy in natural language inference.",
"Compared to RAND-SFT, its superior performance demonstrates the importance of selecting winning tickets rather than a random subset 1783 POS DP NER NLI Accuracy UAS LAS F1 score Accuracy LT-SFT 71.1 (1) 57.1 (1) 37.8 (1) 71.7 (1) 51.4 (1) RAND-SFT 69.2 (1) 54.3 (1) 33.9 (1) -MAD-X 68.6 (16) 54.6 (2) 34.1 (1) 69.9 (8) 49.5 (2) BITFIT 58.1 45.7 23.9 54.9 38.3 LT-SFT TA-ONLY 51.3 (32) 39.1 (1) 19.9 (1) 55.3 (8) 39.9 (4) MAD-X TA-ONLY 52.1 (32) 38.9 (1) 19.5 (1) 52.4 (32) 41.7 (4) Table 2: Results of zero-shot cross-lingual transfer evaluation averaged over all languages when best equivalent reduction factor (shown in parentheses after each result) is chosen.",
"of parameters.",
"Secondly, the results demonstrate the importance of language SFTs/adapters for specializing pretrained models to unseen languages, as they bring about a large increase in performance across the 4 tasks compared to the corresponding settings with task adaptation only (TA-ONLY ).",
"We remark that LT-SFT's zero-shot performance also exceeds translation-based baselines on the AmericasNLI task, achieving an average accuracy of 51.4%, compared with the 48.7% of the translate-train' baseline of Ebrahimi et al. (2021).",
"In Figure 2, we provide a more detailed overview of average cross-lingual model performance across a range of different reduction factors.",
"The results for the LT-SFT and RAND-SFT methods generally improve or stay steady as the number of trainable task parameters increases.",
"On the contrary, there is not such a trend for MAD-X, as lower reduction factors may degrade its results.",
"This makes it easier to choose a good setting for this hyper-parameter when using SFT.",
"Moreover, it is worth stressing again that, contrary to MAD-X, this hyper-parameter does not affect inference time.",
"BITFIT performs much worse than the other methods which perform language adaptation across all tasks.",
"Bearing in mind the strong trend towards increasing performance with increasing K for the other SFT methods, it seems likely that BITFIT , with two orders of magnitude fewer trainable parameters, lacks the capacity to learn effective task 1784 el ro ru th tr XLM-R Base, full FT 71.1/54.3 78.3/63.7 74.1/57.8 67.1/55.7 67.5/51.1 XLM-R Large, full FT (Artetxe et al., 2020) 79.8/61.7 83.6/69.7 80.1/64.3 74.2/62.8 75.9 / 59.3 XLM-R Base MS, LT-SFT 81.9 / 65.5 86.3 / 73.3 81.4 / 64.6 82.4 / 75.2 75.2/58.6 Table 3: Results of zero-shot cross-lingual transfer evaluation on XQuAD (Artetxe et al., 2020), restricted to languages which do not appear in MLQA (Lewis et al., 2020) (see 4.4) in the format F1/exact match score.",
"For additional results at the level of individual languages and an analysis of the efficacy of language adaptation for highversus lowresource target languages, we refer the reader to Appendix C. 5.1 Multi-Source Training As shown in Table 4, multi-source LT-SFT training brings about a large improvement in zero-shot cross-lingual transfer performance on DP, and a modest improvement for NLI.",
"This may be a result of the fact that the training set for NLI contains a relatively small number of non-English examples compared to the DP training set.",
"Also, the AmericasNLI target languages generally have a lower degree of genealogical relatedness to the source languages compared to the DP target languages.",
"Table 3 demonstrates that multi-source training is also beneficial to zero-shot cross-lingual transfer for QA on a series of relatively high-resource languages.",
"In particular, LT-SFT multi-source training of XLM-R Base outperforms single-source full fine-tuning of XLM-R Large (a larger model) comfortably, and outperforms XLM-R Base single-source full fine-tuning by a significant margin.",
"The fact that such an improvement occurs despite each of the 6 non-English source languages having more than an order of magnitude less training data than the English data from SQuAD illustrates the disproportionate advantage of multilingual source data.",
"Finally, we address the following question: is sparsity responsible for preventing the interference of separate fine-tunings when they are composed?",
"To support this hypothesis with empirical evidence, we use LT-SFT to train language 5 and task fine-tunings with different levels of density, i.e. the percentage of non-zero values (from 5% to 100%).",
"We then evaluate all possible combinations of density levels.",
"The results are visualized in the form of a contour plot in Figure 3 for selected combinations of tasks and languages: Buryat, Cantonese, Erzya, Maltese, and Upper Sorbian for DP, and Hausa, Igbo, Luganda, Swahili and Wolof for NER.",
"From Figure 3, it emerges that the performance decreases markedly for SFTs with a density level greater than ~30% of fine-tuned parameters.",
"6 We speculate that this is due to the fact that sparser fine-tunings have a lower risk of overlapping with each other, thus creating interference between the different facets of knowledge they encapsulate.",
"It must be noted, however, that alternative hypotheses could explain the performance degradation in addition to parameter overlap, such as overfitting as a result of excessive capacity.",
"While we leave the search for conclusive evidence to future work, both of these hypotheses illustrate why enforcing sparsity in adaptation, as we propose in our method, is crucial to achieving modularity.",
"Within the framework of the Lottery Ticket Hypothesis, a series of improvements have been suggested to make the original algorithm to find winning tickets (Frankle and Carbin, 2019) more stable: after fine-tuning, Frankle et al. (2019) rewind the parameters to their values after a few iterations rather than their values before training, whereas Renda et al. (2020) also rewind the learning rate.",
"In addition, Zhou et al. (2019) found that 1) different criteria can be used to select weights as an alternative to the magnitude of their change; 2) different rewinding methods are also effective, such as restoring the original sign, but not the value.",
"In future work, we will investigate whether these variants also benefit our method for cross-lingual transfer, where the LTH is used for adaptation rather than pruning.",
"Whereas the LTH was originally conceived in the vision domain for convolutional architectures, it is also effective for pruning models trained on NLP tasks (Yu et al., 2020), such as neural machine translation, and based on Transformer architectures (Prasanna et al., 2020).",
"Recently, Xu et al. (2021a) adapted the LTH specifically to prune pretrained models after fine-tuning.",
"To the best of our knowledge, Wortsman et al. (2020) is the only instance where winning tickets were composed in previous work.",
"In their experiment, a set of task-specific masks were linearly combined at inference time, in order to generalize to new tasks in a continuous learning setting.",
"6 Note, furthermore, that levels of task fine-tuning density greater than ~60% do not vary in performance.",
"This is because their subsets of parameters include embeddings of tokens never encountered during task training, which are therefore never updated even if trainable.",
"We have presented a new method to fine-tune pretrained models that is both modular (like adapters) and expressive (like sparse fine-tuning).",
"This method is based on a variant of the algorithm to find winning tickets under the framework of the Lottery Ticket Hypothesis.",
"We infer a sparse vector of differences with respect to the original model for each individual language (by modeling unlabeled text) and each individual task (with supervised learning).",
"The adaptations for a language and a task can then be composed with the pretrained model to enable zero-shot cross-lingual transfer.",
"Comparing our method with the state-of-the-art baseline in several multilingual tasks, the results have indicated substantial gains across the board in both languages seen and unseen during pretraining (which includes many truly low-resource languages).",
"In future work, our method offers several potential extensions.",
"In addition to the variants to the Lottery Ticket algorithm surveyed in 6, given the importance of sparsity for modularity (5.2), we plan to experiment with additional algorithms previously applied to pruning that can identify and fine-tune a subset of the model parameters, such as DiffPruning (Guo et al., 2021) and ChildTuning (Xu et al., 2021b).",
"Finally, given its simplicity and generality, our method is suited for many other applications of transfer learning in addition to cross-lingual transfer, such as multimodal learning, debiasing, and domain adaptation.",
"The code and models are available online at https: //github.com/cambridgeltl/composable-sft .",
"(cid:70)(cid:70)(cid:70)(cid:70)(cid:70)(cid:70)(cid:70)(cid:70)(cid:70)(cid:70)(cid:70)(cid:70) Alan wishes to thank David and Claudia Harding for their generous support via the Harding Distinguished Postgraduate Scholarship Programme.",
"Anna and Ivan are supported by the ERC PoC Grant MultiConvAI (no. 957356) and a Huawei research donation.",
"We would like to thank Chiara Ponti for the graphic illustration.",
"We also thank the anonymous reviewers for their helpful suggestions."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Generative feature matching network (GFMN) is an approach for training implicit generative models for images by performing moment matching on features from pre-trained neural networks.",
"In this paper, we present new GFMN formulations that are effective for sequential data.",
"Our experimental results show the effectiveness of the proposed method, SeqGFMN, for three distinct generation tasks in English: unconditional text generation, class-conditional text generation, and unsupervised text style transfer.",
"SeqGFMN is stable to train and outperforms various adversarial approaches for text generation and text style transfer.",
"Generative feature matching networks (GFMNs) (dos Santos et al., 2019) has been recently proposed for learning implicit generative models by performing moment matching on features from pre-trained neural networks.",
"This approach demonstrated that GFMN could produce state-of-the-art image generators while avoiding instabilities associated with adversarial learning.",
"Similarly to training generative adversarial networks (GANs) (Goodfellow et al., 2014), GFMN training requires to backpropa-gate through the generated data to update the model parameters.",
"This backpropagation through the generated data, combined with adversarial learning instabilities, has proven to be a compelling challenge when applying GANs for discrete data such as text.",
"However, it remains unknown if this is also an issue for feature matching networks since the effectiveness of GFMN for sequential discrete data has not yet been studied.",
"first contribution , we propose a new formulation of GFMN for unconditional sequence generation, which we name Sequence-GFMN or SeqGFMN for short, by performing token level feature matching.",
"SeqGFMN has a stable training because it does not concurrently train a discriminator, which in principle could easily learn to distinguish between one-hot and soft one-hot representations.",
"As a result, we can use soft one-hot representations that the generator outputs during training without using the Gumbel softmax or REINFORCE algorithm as needed in GANs for text.",
"Additionally, different from GANs (Zhu et al., 2018), SeqGFMN can produce meaningful text without the need of pre-training the generator with maximum likelihood estimation (MLE).",
"We perform experiments using Bidirectional Encoder Representations from Transformers (BERT), GloVe, and FastText as our feature extractor networks.",
"We use two different corpora, and assess both the quality and diversity of the generated texts with three different quantitative metrics: BLEU, Self-BLEU and Frechet Infersent Distance (FID).",
"Additionally, we show that the latent space induced by SeqGFMN contains semantic and syntactic structure, as evidenced by interpolations in the z space.",
"Our second contribution consists in proposing a new strategy for class-conditional generation with GFMN.",
"The key idea here is to perform class-wise feature matching.",
"We apply SeqGFMN to perform sentiment-based conditional generation using the Yelp Reviews dataset, and assess its performance using classification accuracy, BLEU, and Self-BLEU.",
"Finally, as a third contribution , we demonstrate that the feature matching loss is an effective approach to perform distribution matching in the context of unsupervised text style transfer (UTST).",
"Most previous work on UTST adapts the autoencoder framework by adding an additional loss term: adversarial loss or back-translation loss.",
"Our method consists in replacing the adversarial and back-translation loss with style-wise feature matching.",
"Our experimental results indicate that the feature matching loss produces better results than the traditionally used losses.",
"Let G be a sequence generator implemented as a neural network with parameters , and let E be a pretrained NLP feature extractor network with L hidden layers, that produces features at token-level for each token in a sequence of length T .",
"The method consists of training G by minimizing the following token-level feature matching loss function: min T (cid:88) t =1 M (cid:88) j =1 || j,tp data j,tp G ( ) || 2 + || j,tp data j,tp G ( ) || 2 (1) where: j,tp data = E x p data E j,t ( x ) R d j , j,tp G ( ) = E z N (0 ,I nz ) E j,t ( G ( z ; )) R d j , j,tp data ,(cid:96) = E x p data E j,(cid:96),t ( x ) 2 [ j,(cid:96),tp data ] 2 , j,tp G ,(cid:96) ( ) = E z N (0 ,I nz ) E j,(cid:96),t ( G ( z ; )) 2 [ j,(cid:96),tp G ] 2 , (cid:96) = 1 . . . d j , where || .",
"|| 2 is the L 2 loss; x is a real data point sampled from the data distribution p data ; z R n z is a noise vector sampled from the normal distribution N (0 , I n z ) ; E j,t ( x ) denotes the token-level t feature map at a hidden layer j from E ; M L is the number of hidden layers used to perform feature matching; T is the maximum sequence length; and 2 p data and 2 p G are the variances of the features for real data and generated data respectively.",
"Note that this loss function is quite different from both the MLE loss used in regular language models and the adversarial loss used in GANs.",
"In order to train G , we first precompute j,tp data and j,tp data ,(cid:96) on the entire training data.",
"During training, we generate a minibatch of fake data by passing the Gaussian noise vector through the generator.",
"The fixed feature extractor E is used to extract features on the output of the generator at a per-token level.",
"The loss is then computed, as mentioned in Eq.",
"1.",
"The parameters of the generator G are optimized using stochastic gradient descent.",
"Note that the network E is used for feature extraction only and is kept fixed during the training of G .",
"Similar to (dos Santos et al., 2019), we use ADAM moving average, which allows us to use small minibatch sizes.",
"Fig. 1 illustrates SeqGFMN training; note that we use mean matching only for brevity, in practice we match both mean and diagonal covariance.",
"In our SeqGFMN framework, the output of the generator G is a sequence x of soft one-hot representations , { w 1 , w 2 , ..., w T } , where each element w i consists in the output of the softmax function at token i .",
"In the feature extractor E , these soft one-hot representations are multiplied by an embedding matrix to generate soft embeddings , which are then fed to the following layers of E .",
"Conditional generation is motivated by the assump-tion that if the training data can be clustered into distinct and meaningful classes, knowledge of such classes at training time would improve the overall performance of the model.",
"For class-based text generation, some datasets provide such opportunity by labeling the training data with relevant classes (e.g., positive/negative sentiment for Yelp Reviews dataset), information that can be leveraged by our model to condition the generation.",
"For this to be effective, the extracted features used for SeqGFMN need to be sufficiently representative of the text generated yet still be different between classes.",
"To account for the knowledge of latent classes, we extend the loss from Eq.1 for the case of two distinct classes: min T (cid:88) t =1 M (cid:88) j =1 || j,tc =0 || 2 + || j,tc =0 || 2 + || j,tc =1 || 2 + || j,t c =1 || 2 (2) where j,tc = j,tp cdata j,tp cG ( ) and j,tc = j,tp cdata j,tp cG ( ) follows the same definition for means and variances as Eq.1, with the exception that they are now class-dependent.",
"Given a class c , we allow for conditional generation by conditioning the noise vector z on c .",
"Indeed, if z N (0 , I n z ) , applying a class dependent linear transformation z c = A c z + b c will change the noise distribution such that z c N (cid:0) b c , A (cid:62) c A c (cid:1) .",
"A c and b c are learned at training time so to minimize our loss.",
"This enables the model to effectively sample z 1 z N GeneratorNN x 1 x N Feature extractor NN E 1 , 1 ( x 1 ) E M,T ( x 1 ) E 1 , 1 ( x N ) E M,T ( x N ) L = P Tt =1 P Mj =1 || j,tp data 1 NP Ni =1 E j,t ( x i ) || 2 Figure 1: For each training iteration, Generator ( G ) outputs N sentences from noise signals z 1 z N .",
"a new input noise from distinct distributions, conditioned on the class c .",
"Since the model can update the linear transformation parameters A c and b c to minimize its loss, the model can learn transformations that separate or disentangle between the different classes c naturally.",
"For example, conditioning on sentiment where c =0 is the negative sentiment class and c = 1 the positive class, amounts simply to learning two transformations ( A 0 , b 0 ) and ( A 1 , b 1 ).",
"This approach can be extended beyond learning linear transformations to allow for deep neural network to be employed.",
"During training, a minibatch is composed of input noise samples conditioned on class c .",
"Within our generator, we use a conditional batch normalization (condBN) from (Dumoulin et al., 2016).",
"The conditional BN is a 2-stage process: First, we perform a standard BN of a minibatch regardless of c where y i = BN , ( x i ) , using notations from (Ioffe and Szegedy, 2015).",
"Then y i enters a second stage where w i = c y i + c brings class dependency on c as proposed in (Du-moulin et al., 2016).",
"This allows for the influence of class conditioning to carry over the whole model where conditional BN is used.",
"Our models can have three distinct configurations: conditional input noise, conditional BN, or both conditional input noise and conditional BN.",
"Text style transfer consists of rewriting a sentence from a given style s i (e.g., informal) into a different style s j (e.g., formal) while maintaining the content and keeping the sentence fluent.",
"The major challenge for this task is the lack of parallel data, and many recent approaches adapt the encoder-decoder framework to work with non-parallel data (Shen et al., 2017; Fu et al., 2018).",
"This adaptation normally consists in using: (1) the reconstruction loss in an autoencoding fashion, which is intended to learn a conditional language model (decoder D ) while providing content preservation; together with (2) a classification loss produced by a style classifier C , which is intended to guarantee the correct transfer.",
"Balancing these two losses while generating good quality sentences is difficult, and several approaches such as adversarial discriminators (Shen et al., 2017) and cycle-consistency loss (Melnyk et al., 2017) have been employed in recent works.",
"Here, we use feature matching as a way to alleviate this problem.",
"Essentially, our unsupervised text style transfer approach is an encoder-decoder trained with the following three losses: Reconstruction loss: Given an input sentence x s i from set X and its decoded sentence x s i = D ( E ( x s i ) , s i ) (decoded in the same input style s i ), the reconstruction loss measures how well the decoder D is able to reconstruct it: L rec = E x si X [ log p D ( x s i | E ( x s i ) , s i )] .",
"|",
"(4) where X is the set of style transferred sentences generated by the current model.",
"For the classifier, the first term provides supervised signal regarding style classification and the second term gives additional training signal from the transferred data, enabling the classifier to be trained in a semi-supervised regime.",
"For the encoder-decoder the second term gives feedback on the current gener-ator's effectiveness on transferring sentences to a different style.",
"Feature Matching loss: It is computed in a similar way as the class-conditional loss (Eq. 2).",
"This loss consists of matching statistics of the features for each style separately.",
"This means that when transferring from style s i to s j , we match the features of the resulting sentence with the features of real data that are from the target style s j .",
"(Zhang et al., 2017a) proposes Adversarial Feature Matching for Text Generation by adding a reconstruction feature loss to the GAN objective.",
"This is different from our setup, as our discriminator is not learned, and our feature matching is per token and not on a global sentence level.",
"Sequence GAN (SeqGAN) (Yu et al., 2017), MaliGAN (Che et al., 2017), and RankGAN (Lin et al., 2017) use a pretrained generator with MLE loss with a per token reward discriminator that is trained with reinforcement learning.",
"SeqGFMN is similar to SeqGAN in the sense that it has a per token reward (per token feature matching loss).",
"Still, it alleviates the need for pre-training the generator and the cumbersome training of a discriminator by relying on a fixed, state-of-the-art, text feature extractor such as BERT.",
"Due to the discrete nature of the problem, training implicit models is tricky (de Masson d'Autume et al., 2019), which is addressed by using REINFORCE, actor-critic methods (Fedus et al., 2018), and Gumbel softmax trick(Kusner and Hernandez-Lobato, 2016).",
"For unsupervised text style transfer, different adaptations of the encoder-decoder framework have been proposed recently.",
"(Shen et al., 2017; Fu et al., 2018) uses adversarial classifiers to decode to a different style/language.",
"(Melnyk et al., 2017),(Nogueira dos Santos et al., 2018) proposed a method that combines a collaborative classifier with the back-transfer loss.",
"(Prabhumoye et al., 2018) presented an approach that trains different encoders, one per style, by combining the encoder of a pre-trained NMT and style classifiers.",
"The main difference between our approach and these previous work consists in the fact that we use the feature matching loss to perform distribution matching.",
"Datasets : We evaluate our proposed approach on three different english datasets: MSCOCO (Lin et al., 2014), EMNLP 2017 WMT News dataset (Bojar et al., 2017), and Yelp Reviews Dataset (Shen et al., 2017).",
"Both COCO and WMT News datasets are used for unconditional models, while Yelp Reviews is employed to evaluate class-conditional generation and unsupervised text style transfer.",
"Feature Extractors for Textual Data: We experiment with different feature extractors that generate token-level representations.",
"We use word embeddings from GloVe (Pennington et al., 2014) and FastText (Bojanowski et al., 2017) as representatives of shallow (cheap-to-train) architectures.",
"As a representative of large, deep feature extractor we use BERT (Devlin et al., 2018).",
"Devlin et al. (2018) demonstrated that the features extracted by BERT can boost the performance of diverse NLP tasks.",
"Our hypothesis is that BERT features are informative enough to allow the training of (cross-domain) text generators with the help of feature matching.",
"Metrics: In order to evaluate the diversity and quality of texts of the unconditional generators we use three metrics BLEU (Papineni et al., 2002), Self-BLEU (Zhu et al., 2018) and Frechet Infersent Distance, FID (Heusel et al., 2017).",
"Additionally, for class-conditional generation and unsupervised text style transfer, we report accuracy scores from a CNN sentiment classifier trained on the Yelp.",
"Unconditional Text Generation : In Tab.",
"1, we show quantitative results for SeqGFMN trained on COCO and WMT News using different feature extractors.",
"As expected, BERT as a feature extractor gives better performance because of a more significant and richer features used.",
"We also present a comparison with other implicit generative models for text generation from scratch.",
"We compare SeqGFMN with five different GAN approaches: SeqGAN (Yu et al., 2017), MaliGAN (Che et al., 2017), RankGAN (Lin et al., 2017), TextGAN (Zhang et al., 2017a) and RelGAN (Weili Nie and Patel, 2019).",
"We do not use generator pre-training for any of the models.",
"As reported in Tab.",
"1, SeqGFMN outperforms all GAN models in terms of BLEU and FID.",
"The combination of low BLEU and low Self-BLEU for the different GANs indicates that the learned models generate random n-grams that do not appear in the test set.",
"All GANs fail to learn reasonable models due to the challenges of learning a discrete data generator from scratch under the min-max game.",
"Whereas, SeqGFMN can learn suitable generators without the need of generator pre-training.",
"Class-conditional Generation : Conditional generation experiments were conducted on Yelp Reviews dataset with sentiment labels (178K negative, 268K positive).",
"For this experiment, we first pre-trained the Generator using a conditional denoising AE where class labels are provided only to the decoder D .",
"The architecture of the encoder is the same as in (Zhang et al., 2017b) with three strided convolutional layers.",
"Once pre-trained, D is used as initialization for our Generator G .",
"The training is similar Model BLEU-2 BLEU-3 BLEU-4 BLEU-5 Self-BLEU FID COCO Real Data 0.721 0.494 0.308 0.194 0.487 3.559 SeqGAN 0.044 0.019 0.012 0.010 0.026 13.167 MaliGAN 0.042 0.017 0.011 0.008 0.032 15.855 RankGAN 0.039 0.016 0.010 0.008 0.023 15.502 TextGAN 0.034 0.015 0.010 0.008 0.624 17.275 RelGAN 0.230 0.055 0.026 0.017 0.811 13.948 SeqGFMN (FastText) 0.389 0.153 0.089 0.059 0.644 6.371 SeqGFMN (Glove) 0.403 0.139 0.077 0.053 0.655 6.218 SeqGFMN (BERT) 0.695 0.476 0.277 0.186 0.802 5.610 WMT News Real Data 0.852 0.596 0.356 0.199 0.289 0.365 SeqGAN 0.008 0.004 0.003 0.003 0.088 8.731 MaliGAN 0.070 0.021 0.012 0.008 0.018 9.057 RankGAN 0.188 0.055 0.024 0.015 0.973 12.306 TextGAN 0.053 0.018 0.010 0.008 0.644 9.945 RelGAN 0.076 0.026 0.015 0.012 0.451 8.809 SeqGFMN (FastText) 0.364 0.102 0.045 0.028 0.787 3.761 SeqGFMN (Glove) 0.385 0.106 0.047 0.029 0.735 4.033 SeqGFMN (BERT) 0.760 0.464 0.204 0.096 0.888 3.530 Table 1: Quantitative results for different implicit generators trained from scratch.",
"to the previous section except now sentiment class labels are passed to G , and class-dependent statistics of BERT features are used, as described in 2.2.",
"Tab.",
"2 presents results for our regular model (baseline) and the three conditional generators: Cond.",
"Noise, Cond.",
"Batch Normalization (BN), Cond.",
"Noise+BN.",
"We use 10K generated sentences for each sentiment class to compute classification accuracy.",
"In terms of accuracy and BLEU-3 score, the Cond.",
"Noise+BN model provides the best generator as it is able to capture and leverage the class information.",
"Unsupervised Text Style Transfer (UTST) : In Table 3, we report BLEU and accuracy scores for SeqGFMN and six baselines: BackTranslation (Prab-humoye et al., 2018), which uses back-transfer loss; CrossAligned (Shen et al., 2017), MultiDecoder (Fu et al., 2018), and StyleEmbedding (Fu et al., 2018), which use adversarial loss; and TemplateBased (Li et al., 2018) and Del-Retrieval (Li et al., 2018), which uses rule-based methods.",
"The BLEU score is computed between the transferred sentences and the human-annotated transferred references, similar to (Li et al., 2018).",
"And, the accuracy is based on our pre-trained classifier.",
"Compared to the other models, SeqGFMN produces the best balance between BLEU and accuracy.",
"Additionally, if we use back-transfer loss together with feature matching loss ( SeqGFMN + BT ) our model gets a significant improvement on both metrics.",
"We presented new implicit generative models based on feature matching loss that are suitable for unconditional and conditional text generation.",
"Our results demonstrated that backpropagating through discrete data is not an issue for the training via matching distributions at the token level.",
"SeqGFMN can be trained from scratch without the need for RL or Gumbel Softmax.",
"This approach has allowed us to create effective models for unconditional generation, class-conditional generation, and unsupervised text style transfer.",
"We believe this work opens a new competitive avenue in the area of implicit generative models for sequential data."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"method",
"result",
"objective",
"abstain",
"method",
"objective",
"abstain",
"method",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"method",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"method",
"objective"
] |
[
"We study the problem of event coreference resolution (ECR) that seeks to group coreferent event mentions into the same clusters.",
"Deep learning methods have recently been applied for this task to deliver state-of-the-art performance.",
"However, existing deep learning models for ECR are limited in that they cannot exploit important interactions between relevant objects for ECR, e.g., context words and entity mentions, to support the encoding of document-level context.",
"In addition, consistency constraints between golden and predicted clusters of event mentions have not been considered to improve representation learning in prior deep learning models for ECR.",
"This work addresses such limitations by introducing a novel deep learning model for ECR.",
"At the core of our model are document structures to explicitly capture relevant objects for ECR.",
"Our document structures introduce diverse knowledge sources (discourse, syntax, semantics) to compute edges/interactions between structure nodes for document-level representation learning.",
"We also present novel regularization techniques based on consistencies of golden and predicted clusters for event mentions in documents.",
"Extensive experiments show that our model achieve state-of-the-art performance on two benchmark datasets.",
"Event coreference resolution (ECR) is the task of clustering event mentions (i.e., trigger words that evoke an event) in a document such that each cluster represents a unique real world event.",
"For example, the three event mentions in Figure 1, i.e., refuse to sign , raised objections , and doesn't sign , should be grouped into the same cluster to indicate their coreference to the same event.",
"A common component in prior ECR models involves a binary classifier that receives a pair of event mentions and predict their coreference (Chen et al., 2009; Lu et al., 2016; Lu and Ng, 2017).",
"To this end, an important step in ECR models is to transform event mention pairs into representation vectors to encode discriminative features for coreference prediction.",
"Early work on ECR has achieved feature representation via feature engineering where multiple features are hand-designed for input event mention pairs (Lu and Ng, 2017).",
"A major problem with feature engineering is the sparsity of the features that limits the generalization to unseen data.",
"Representation learning in deep learning models has recently been introduced to address this issue, leading to more robust methods with better performance for ECR (Nguyen et al., 2016; Choubey and Huang, 2018; Huang et al., 2019; Barhom et al., 2019).",
"As such, there are at least two limitations in existing deep learning models for ECR that will be addressed in this work to improve the performance.",
"First, as event mentions pairs for coreference prediction might belong to long-distance sentences in documents, capturing document-level context between the event mentions (i.e., beyond the two sentences that host the event mentions) might present useful information for ECR.",
"As their first limitation, prior deep learning models for ECR has only attempted to encode document-level context via hand-designed features (Kenyon-Dean et al., 2018; Barhom et al., 2019) that still suffer from the feature sparsity issue.",
"In addition, such prior work is unable to exploit ECR-related objects in documents (e.g., entity mentions, context words) and their connections/interactions (possibly beyond sentence boundary) to aid representation learning.",
"An example for the importance of context words, entity mentions, and their interactions for ECR can be seen in Figure 1.",
"Here, to decisively determine the coreference of raised objections and doesn't sign , ECR systems should recognize Trump and the Donald Trump continued to refuse to sign a relief package agreed in Congress and headed instead to the golf course. Trump, who is spending the Christmas and New Year holiday at his Mar-a-Lago resort in Florida, raised objections to the $900bn relief bill only after it was passed by Congress last week, having been negotiated by his own treasury secretary Steven Mnuchin All these folks and their families will suffer if Trump doesn't sign the damn bill. Coreferential event mentions Coreferential entity mentions Figure 1: An example for event coreference resolution. $900bn relief bill as the arguments of raised objections , and Trump and the damn bill as the arguments of doesn't sign .",
"The systems should also be able to realize the coreference relation between the two entity mentions Trump , and between the $900bn relief bill and the damn bill to conclude the same identity for the event mentions (i.e., as they involve the same arguments).",
"As such, it is helpful to identify relevant entity mentions, context words and leverage their rela-tions/interactions to improve representation vectors for event mentions in ECR.",
"Motivated by this issue, we propose to form graphs for documents (called document structures) to explicitly capture relevant objects and interactions for ECR that will be consumed to learn representation vectors for event mentions.",
"In particular, context words, entity mentions, and event mentions will serve as the nodes in our document structures due to their intuitive relevance to ECR.",
"Different types of knowledge sources will then be exploited to connect the nodes for the document structures, featuring discourse information (e.g., to connect coreferring entity men-tions), syntactic information (e.g., to directly link event mentions and their arguments), and semantic similarity (e.g., to connect words/event mentions with similar meanings).",
"Such rich document structures allows us to model the interactions of relevant objects for ECR beyond sentence level for document-level context.",
"Using graph convolutional neural networks (GCN) (Kipf and Welling, 2017; Nguyen and Grishman, 2018) for representation learning, we expect enriched representation vectors from the document structures can further improve the performance of ECR systems.",
"To our knowledge, this is the first time that rich document structures are employed for ECR.",
"Second, prior deep learning models for ECR fails to leverage consistencies between golden clusters (provided by human) and predicted clusters (generated by models) to promote representation learning.",
"In particular, it is intuitive that ECR models can achieve better performance if their predicted event clusters are more similar to the golden event clusters in the data.",
"To this end, we propose to obtain different inconsistency measures between golden and predicted clusters that will be incorporated into the overall loss function for minimization.",
"As such, we expect that the consistency/similarity regularization between two types of clusters can provide useful training signals to improve representation vectors for event mentions in ECR.",
"To our knowledge, this is also the first work to exploit cluster consistency-based regularization for representation learning in ECR.",
"Finally, we conduct extensive experiments for ECR on the KBP benchmark datasets.",
"The experiments demonstrate the benefits of the proposed methods and lead to state-of-the-art performance for ECR.",
"Event coreference resolution is broadly related to works on entity coreference resolution that aim to resolve nouns phrases/mentions for entities (Raghu-nathan et al., 2010; Ng, 2010; Durrett and Klein, 2013; Lee et al., 2017a; Joshi et al., 2019b,a).",
"However, resolving event mentions has been considered as a more challenging task than entity coreference resolution due to the more complex structures of event mentions (Yang et al., 2015).",
"Our work focuses on the within-document setting for ECR where input event mentions are expected to appear in the same input documents; however, we also note prior works on cross-document ECR (Lee et al., 2012a; Adrian Bejan and Harabagiu, 2014; Choubey and Huang, 2017; Kenyon-Dean et al., 2018; Barhom et al., 2019; Cattan et al., 2020).",
"As such, for within-document ECR, previous methods have applied feature-based models for pairwise classifiers (Ahn, 2006; Chen et al., 2009; Cybulska and Vossen, 2015; Peng et al., 2016), spectral graph clustering (Chen and Ji, 2009), information propagation (Liu et al., 2014), markov logic networks (Lu et al., 2016), joint modeling of ECR with event detection (Araki and Mi-tamura, 2015; Lu et al., 2016; Chen and Ng, 2016; Lu and Ng, 2017), and recent deep learning models (Nguyen et al., 2016; Choubey and Huang, 2018; Huang et al., 2019; Lu et al., 2020; Choubey et al., 2020).",
"Compared to previous deep learning works for ECR, our model presents a novel representation learning framework based on document structures to explicitly encode important interactions between relevant objects, and representation regularization to exploit the cluster consistency between golden and predicted clusters for event mentions.",
"Formally, in ECR, given an input document D = w 1 , w 2 , . . . , w N (of N words/tokens) with a set of event mentions E = { e 1 , e 2 , . . . , e | E | } , the goal is to group the event mentions in E into clusters to capture the coreference relation between mentions.",
"Our ECR model consists of four major components:",
"(i) Document Encoder to words into representation vectors,",
"(ii) Document Structure to create graphs for documents and learn rich representation vectors for event mentions,",
"(iii) End-to-end Resolution to simultaneously resolve the coreference for the entity mentions in D , and",
"(iv) Cluster Consistency Regularization to regularize representation vectors based on consistency constraints between golden and predict event mention clusters.",
"Figure 2 presents an overview of our model for ECR.",
"In the first step, we transform each word w i D into a representation vector x i by feeding D into the pre-trained language model BERT (Devlin et al., 2019).",
"In particular, as BERT might split w i into several word-pieces, we average the hidden vectors of the word-pieces of w i in the last layer of BERT to obtain the representation vector x i for w i .",
"To handle long documents with BERT, we divide D into segments of 512 consecutive word-pieces that will be encoded separately.",
"The resulting sequence X = x 1 , x 2 , . . . , x n for D is then sent to the next steps for further computation.",
"This component aims to learn representation vectors for the event mentions in E using an interaction graph G = {N , E} for D that facilitates the enrichment of representation vectors for event mentions with relevant objects and interactions at document level.",
"As such, the nodes and edges in G for our ECR problem are constructed as follows: Nodes : The node set N for our interaction graph G should capture relevant objects for the coreference between event mentions in D .",
"Toward this goal, we consider all the context words (i.e., w i ), event mentions, and entity mentions in D as relevant objects for our ECR problem.",
"For convenience, let M = { m 1 , m 2 , . . . , m | M | } be the set of entity mentions in D .",
"The node set N for G is thus created by the union of D , E , and M : N = D E M = { n 1 , n 2 , . . . , n |N| } .",
"To achieve a fair comparison, we use the predicted event mentions that are provided by (Choubey and Huang, 2018) in the datasets for E .",
"The Stanford CoreNLP toolkit is employed to obtain the entity mentions M .",
"Edges : The edges between the nodes in N for G will be represented by an adjacency matrix A = { a ij } i,j = |N| ( a ij R ) in this work.",
"As A will be consumed by Graph Convolutional Networks (GCN) to learn representation vectors for ECR, the value/score a ij between two nodes n i and n j in N is expected to estimate the importance (or the level of interaction) of n j for the representation computation of n i .",
"This structure allows n i and n j of N to directly interact and influence the representation computation of each other even if they are sequentially far away from each other in D .",
"As presented in the introduction, we explore three types of information to design the edges E (or compute the interaction scores a ij ) for G in our model, including discourse-based, syntax-based and semantic-based information.",
"Discourse-based Edges : Due to multiple sentences and event/entity mentions involved in the input document D , we need to understand where such objects span and how they relate to each other to effectively encode document context for ECR.",
"To this end, we propose to exploit three types of discourse information to obtain the interaction graph G , i.e., sentence boundary, coreference structure, and mention span for event/entity mentions in D .",
"Sentence Boundary : Our motivation for this information is that event/entity mentions appearing Donald Trump continue refuse to sign .",
"in the same sentences tend to be more contextually related to each other than those in different sentences.",
"As such, event/entity mentions in the same sentences might involve more helpful information for the representation computation of each other in our problem.",
"To capture this intuition, we compute the sentence boundary-based interaction score a sentij for the nodes n i and n j in N where a sentij = 1 if n i and n j are the event/entity mentions of the same sentences in D (i.e., n i , n j E M ); and 0 otherwise.",
"We will use a sentij as an input to compute the overall interaction score a ij for G later.",
"Entity Coreference Structure : Instead of considering within-sentence information as in a sentij , coreference structure focuses on the connection of entity mentions across sentences to enrich their representations with the contextual information of the coreferring ones.",
"As such, to enable the interaction of representations for coreferring enity mentions, we compute the conference-based score a coref ij for each pair of nodes n i and n j to contribute to the overall score a ij for representation learning.",
"Here, a corefij is set to 1 if n i and n j are coreferring entity mentions in D , and 0 otherwise.",
"Note that we also use the Stanford CoreNLP toolkit to determine the coreference of entity mentions in this work.",
"Mention Span : The sentence boundary and coreference structure scores model interactions of event and entity mentions in D based on discourse structure.",
"To connect event and entity mentions to context words w i for representation learning, we employ the mention span-based interaction score a spanij as another input for a ij .",
"Here, a spanij is only set to 1 (i.e., 0 otherwise) if n i is a word ( n i D ) in the span of the entity/event mention n j ( n j E M ) or vice verse.",
"a spanij is important as it helps ground representation vectors of event/entity mentions to the contextual information in D .",
"Syntax-based Edges : We expect the dependency trees of the sentences in D to provide beneficial information to connect the nodes in N for effective representation learning in ECR.",
"For example, dependency trees have been used to retrieve important context words between an event mentions and their arguments in prior work (Li et al., 2013; Veyseh et al., 2020a,b).",
"To this end, we propose to employ the dependency relations/connections between the words in D to obtain a syntax-based interaction score a depij for each pair of nodes n i and n j in N , serving as an additional input for a ij .",
"In particular, by inheriting the graph structures of the dependency trees of the sentences in D , we set a dep ij to 1 if n i and n j are two words in the same sentence (i.e., n i , n j D ) and there is an edge between them in the corresponding dependency tree 1 , and 0 otherwise.",
"Semantic-based Edges : This information leverages the semantic similarity of the nodes in N to enrich the overall interaction scores a ij for G .",
"Our motivation is that a node n i will contribute more to the representation computation of another node n j for ECR if n i is more semantically related to n j .",
"In particular, as the representation vectors for the nodes in N have captured the contextual semantics of the words in D , we propose to explore 1 We use Stanford CoreNLP to parse sentences.",
"a novel source of semantic information that relies on external knowledge for the words to compute interaction scores between the nodes N in our document structures for ECR.",
"We expect the external knowledge for the words to provide complementary information to the contextual information in D , thus further enriching the overall interaction scores a ij for the nodes in N .",
"To this end, we propose to utilize WordNet (Miller, 1995), a rich network of word meanings, to obtain external knowledge for the words in D .",
"The word meanings (i.e., synsets) in WordNet are connected to each other via different semantic relations (e.g., synonyms, hyponyms).",
"In particular, our first step to generate knowledge-based similarity scores involves mapping each word node n i D N to a synset node M i in WordNet using a Word Sense Disambiguation (WSD) tool.",
"In particular, we employ WordNet 3.0 and the state-of-the-art BERT-based WSD model in (Blevins and Zettlemoyer, 2020) to perform the word-synset mapping in this work.",
"Afterward, we compute a knowledge-based similarity score a structij for each pair of word nodes n i and n j in D N using the structure-based similarity of their linked synsets M i and M j in WordNet (i.e., a structij = 0 if either n i or n j is not a word node in D N ).",
"Accordingly, the Lin similarity measure (Lin et al., 1998) for synset nodes in WordNet is utilized for this purpose: a structij = 2 IC ( LCS ( M i ,M j )) IC ( M i )+ IC ( M j ) .",
"Here, IC and LCS represent the information content of synset nodes and the least common sub-sumer of two synsets in the WordNet hierarchy (the most specific ancestor node) respectively 2 .",
"Structure Combination : Up to now, five scores have been generated to capture the level of interactions in representation learning for each pair of nodes n i and n j in N according to different information sources (i.e., a sentij , a corefij , a spanij , a depij and a struct ij ).",
"For convenience, we group the five scores for each node pair n i and n j into a vector d ij = [ a sentij , a corefij , a spanij , a depij , a structij ] of size 5.",
"To combine the scores in d ij into an overall rich interaction score a ij for n i and n j in G , we use the following normalization: a ij = exp( d ij q T ) / (cid:88) u =1",
"2 We use the nltk tool to obtain the Lin similarity: https://www.nltk.org/howto/wordnet.",
"html .",
"We tried other WordNet-based similarities available in nltk (e.g., Wu-Palmer similarity), but the Lin similarity produced the best results in our experiments.",
"Representation Learning : Given the combined interaction graph G with the adjacency matrix A = { a ij } i,j = |N| , we use GCNs to induce representation vectors for the nodes in N for ECR.",
"In particular, our GCN model takes the initial representation vectors v i of the nodes n i N as the input.",
"Here, the initial representation vector v i for a word node n i D is directly obtained from the BERT-based representation vector x c X (i.e., v i = x c ) of the corresponding word w c for n i .",
"In contrast, for event and entity mentions, their initial representation vectors are obtained by max-pooling the contextualized embedding vectors in X that correspond to the words in the event/entity men-tions' spans.",
"For convenience, we organize v i into rows of the input matrix H 0 = [ v 1 , . . . , v |N| ] .",
"The GCN model then involves G layers that generate the matrix H l at the l -th layer for the nodes in N ( 1 l G ) via: H l = ReLU ( AH l 1 W l ) ( W l is the weight matrix for the l -th layer).",
"The output of the GCN model after G layers is HG whose rows are denoted by HG = [ h 1 , . . . , h |N| ] , serving as more abstract representation vectors for the nodes n i in the coreference prediction for event mentions.",
"Also, for convenience, let { r e 1 , . . . , r e | E | } HG be the set of GCN-induced representation vectors for the event mention nodes in e 1 , . . . , e | E | in E .",
"To facilitate the incorporation of the consistency regularization between golden and predicted clusters into the training process, we perform and end-to-end procedure that seeks to simultaneously resolve the coreference for the event mentions in E in a single process.",
"Motivated by the entity coreference resolution in (Lee et al., 2017b), we implement the end-to-end resolution via a set of antecedent assignments for the event mentions in E .",
"In particular, we assume that the event mentions in E are enumerated in their appearance order in D .",
"As such, our model aims to link each event mention e i E to one of its prior event mention in the set Y i = { (cid:15), e 1 , . . . , e i 1 } ( (cid:15) is a dumpy antecedent).",
"Here, a link of e i to a non-dumpy antecedent e j in Y i represents a coreference relation between e i and e j .",
"In contrast, a dumpy assignment for e i indicates that e i is not coreferent with any prior event mention.",
"By forming a coreference graph with e i as the nodes, the non-dumpy antecedent assignments for every event mention in E can be utitlized to connect coreference event mentions.",
"Connected components from the coreference graph can then be returned to serve as predicted event mention clusters in D .",
"In order to predict the coreferent antecedent y i Y for an event mention e i , we compute the distribution over the possible antecedents in Y i for e i via: P ( y i | e i , Y i ) = e s ( ei,yi ) (cid:80) y (cid:48)Y ( i ) e s ( ei,y (cid:48) ) where s ( e i , e j ) is a score function to determine the coreference likelihood between e i and e j in D .",
"To this end, we set s ( e i , (cid:15) ) = 0 for all e i E .",
"Inspired by (Lee et al., 2017b), we obtain the score function s ( e i , e j ) for e i and e j by leveraging their GCN-induced representation vectors r e i and r e j via: s ( e i , e j ) = s m ( e i ) + s m ( e j ) + s c ( e i , e j ) + s a ( e i , e j ) s m ( e i ) = w (cid:62) m FF m ( r e i ) s c ( e i , e j ) = w (cid:62) a FF c ([ r e i , r e j , r e i (cid:12) r e j ]) s a ( e i , e j ) = r (cid:62) e i W c r e j where F m and FF c are two-layer feed-forward networks, w (cid:62) m and w (cid:62) a are learnable vectors, W c is a weight matrix, and (cid:12) is the element-wise multiplication.",
"At the inference time, we employ the greedy decoding to predict the antecedent y i for e i : y i = argmax P ( y i | e i , Y i ) .",
"For training, we use the negative log-likelihood as the loss function in our end-to-end framework: L pred = (cid:80) | E | i =0 log P ( y i | e i , Y i ) ( y i Y i is the golden antecedent for e i ).",
"To further improve representation learning for ECR, we propose to regularize the induced representation vectors of the event mentions in E to explicitly enforce the consistency/similarity between golden and predicted event mention clusters in D .",
"This is based on our motivation that ECR models will perform better if they can produce more similar event mention clusters to the golden ones.",
"As such, for convenience, let T = { T 1 , T 2 , . . . , T |T | } and P = { P 1 , P 2 , . . . , P |P| } be the golden and predicted sets of event mentions in E respectively, i.e., T i , P j E , and T 1 T 2 . . . T |T | = P 1 P 2 . . . P |P| = E .",
"Also, for each cluster C in T or P , we compute a centroid vector r C for it by averaging the representation vectors of the event mention members: r C = average e C ( r e ) .",
"This leads to the centroid vectors { r T 1 , r T 2 , . . . , r T |T | } and { r P 1 , r P 2 , . . . , r P |P| } for T and P respectively.",
"We propose the following regularization terms for cluster consistency: Intra-cluster Consistency : This constraint concerns the inner information of each cluster, characterizing the structure of each individual event mention in its golden and predicted clusters in T and P .",
"In particular, for each event mention e i E , we expect its distances to the centroid vectors of the corresponding golden and predicted clusters T (cid:48) i and P (cid:48) i in T and P (respectively) to be similar, i.e., T (cid:48) i T , P (cid:48) i P , e i T (cid:48) i , e i P (cid:48) i .",
"As such, we compute the distances between the representation vector r e i of e i to the centroid vectors r T (cid:48) i and r P (cid:48) i via the Euclidean distances (cid:107) r e i r T (cid:48) i (cid:107) 22 and (cid:107) r e i r P (cid:48) i (cid:107) 22 .",
"Afterward, the differences L inner between the two distances for golden and predicted clusters are aggregated over all event mentions and added into the overall loss function for minimization: L inner = (cid:80) | E | i =1 |(cid:107) r e i r T (cid:48) i (cid:107) 22 (cid:107) r e i r P (cid:48) i (cid:107) 22 | .",
"Inter-cluster Consistency : In this constraint, we expect that the structure among the clusters T i in the golden set T is consistent with those for the predicted event cluster set P (i.e., inter-cluster regulation).",
"To implement this idea, we encode the structure of the clusters in a set via the average of the pairwise distances between the centroid vectors of the clusters.",
"In particular, the inter-cluster structure scores for the golden and predicted clusters in T and P are computed via: s T = 2 |T | ( |T | 1) (cid:80) |T | i =1 (cid:80) |T | j = i +1 (cid:107) r T i r T j (cid:107) 22 , and s P = 2 |P| ( |P| 1) (cid:80) |P| i =1 (cid:80) |P| j = i +1 (cid:107) r P i r P j (cid:107) 22 .",
"The difference between the structure scores for golden and predicted clusters T and P is then included into the overall loss function for minimization: L inter = | s T s P | .",
"Inter-set Similarity : This constraint aims to directly promote the similarity between the golden clusters in T and the predicted clusters in P .",
"As such, for the golden and predicted cluster sets T and P , we first obtain the overall centroid vectors u T and u P (respectively) by averaging the centroid vectors of their member clusters: u T = average T T ( r T ) and u P = average P P ( r P ) .",
"The Euclidean distance L sim is then integrated into the overall loss for minimization: L sim = (cid:107) u T u P (cid:107) 22 .",
"Note that L inner , L inter , and L sim will be zero if the predicted clusters in P are the same as those in the golden clusters in T .",
"To summarize, the overall loss function L to train our ECR model in this work is: L = L pred + inner L inner + inter L inter + sim L sim with inner , inter , and sim as the trade-off parameters.",
"Following prior work (Choubey and Huang, 2018), we train our ECR models on the KBP 2015 dataset (Mitamura et al., 2015) and evaluate the models on the KBP 2016 and KBP 2017 datasets for ECR (Mitamura et al., 2016, 2017).",
"In particular, the KBP 2015 dataset includes 360 annotated documents for ECR (181 documents from discussion forum and 179 documents from news articles).",
"We use the same 310 documents from KBP 2015 as in (Choubey and Huang, 2018) for the training data and the remaining 50 documents for the development data.",
"Also, similar to (Choubey and Huang, 2018), the news articles in KBP 2016 (85 documents) and KBP 2017 (83 documents) are leveraged for test datasets.",
"To ensure a fair comparison, we use the predicted event mentions provided by (Choubey and Huang, 2018) in all the datasets.",
"Finally, we report the ECR performance based on the official KBP 2017 scorer (version 1.8) 3 .",
"The scorer employs four coreference scoring measures, including MUC (Vilain et al., 1995), B 3 (Bagga and Baldwin, 1998), CEAF-e (Luo, 2005), BLANC (Lee et al., 2012b), and the unweighted average of their F1 scores (AVGF 1 ).",
"Hyper-parameters for the models are fine-tuned by the AVGF 1 scores over development data.",
"The selected values from the tuning process include: 1 e 5 for the learning rate of the Adam optimizer (se-lected from [ 1 e -5, 2 e -5, 3 e -5, 4 e -5, 5 e -5]); 8 for the mini-batch size (selected from [8 , 16 , 32 , 64] ); 128 hidden units for all the feed-forward network and GCN layers (selected from [64 , 128 , 256 , 512] ); 2 layers for the GCN model, G = 2 (selected from [1 , 2 , 3 , 4] ), and inner = 0 .",
"1 , inter = 0 .",
"1 , and sim = 0 .",
"1 for the trade-off parameters in the overall loss function L (selected from [0 . 1 , 0 , 2 , . . . , 0 . 9] ).",
"Finally, we use the BERT base model (of 768 dimensions) for the pre-trained word embeddings (updated during the training).",
"We compare the proposed model for ECR with document structures and cluster consistency regularization (called StructECR) with prior work ECR models in the same evaluation setting, including the joint model between ECR and event detection (Lu and Ng, 2017), the integer linear programming",
"approach in (Choubey and Huang, 2018), and the discourse structure profiling model in (Choubey et al., 2020) (also the model with the best reported performance in KBP datasets).",
"In addition, we examine the following baselines of StructECR to highlight the benefits of the proposed components: E2E-Only : This variant implements the end-to-end resolution model described in Section 3.3 where all event mentions in a document are resolved simultaneously in a single process.",
"However, different from our full model StructECR, E2E-Only does not include the document structure component with GCN for representation learning, i.e., it directly uses the initial representation vectors v i (induced from BERT) for the event mentions in the computation of the distribution P ( y i | e i , Y i ) .",
"Also, the cluster consistency regularization in Section 3.4 is also not included in this model.",
"Pairwise : This model is similar to E2E-Only in that it does not applies the document structures and regularization terms in StructECR.",
"In addition, instead of simultaneously resolving event mentions in documents, Pairwise predicts the coreference for every pair of event mentions separately.",
"In particular, the representation vectors v e i and v e j for two event mentions e i and e j (included from BERT) are combined via [ v e i , v e j , v e i (cid:12) v e j ] .",
"This vector is then sent into a feed-forward network to produce a distribution over possible coreference labels between e i and e j (i.e., two labels for being coreferent or not).",
"The coreference labels for every pair of event mentions are then gathered in a coreference graphs among event mentions; the connected components will be returned for the event clusters.",
"Table 1 reports the performance of the ECR models on the KBP 2016 and KBP 2017 datasets.",
"As can be seen from the table, E2E-Only performs comparably or better than prior state-of-the-art models for ECR, e.g., (Choubey and Huang, 2018) and (Choubey et al., 2020), that employ extensive feature engineering.",
"In addition, the better performance of E2E-Only over Pairwise (for both KBP 2016 and KBP 2017) illustrates the benefits of end-to-end coreference resolution for event mentions in documents.",
"Most importantly, the proposed model StructECR significantly outperforms all the baseline models for which the performance improvement over E2E-Only is 1.94% and 1.26% (i.e., AVGF 1 scores) over the KBP 2016 and KBP 2017 datasets respectively.",
"This clearly demonstrates the benefits of the proposed ECR model with rich KBP 2016 KBP 2017 Model B 3 CEAF e MUC BLANC AVGF 1 B 3 CEAF e MUC BLANC AVGF 1 (Lu and Ng, 2017) 50.16 48.59 32.41 32.72 40.97 ---(Choubey and Huang, 2018) 51.67 49.10 34.08 34.08 42.23 50.35 48.61 37.24 31.94 42.04 (Choubey et al., 2020) 52.78 49.70 34.62 34.49 42.90 51.68 50.57 37.8 33.39 43.36 Pairwise 52.16 49.84 30.79 32.21 41.25 50.97 48.80 36.92 31.86 42.14 E2E-Only 50.89 50.43 36.05 33.93 42.83 51.60 52.03 38.53 33.02 43.80 StructECR 52.77 52.29 38.37 35.66 44.77 51.93 52.82 40.73 34.75 45.06 Table 1: Models' performance on the KBP 2016 and KBP 2017 datasets.",
"Two major components in the proposed model StructECR involve the document structures and the cluster consistency regularization.",
"This section performs an ablation study to reveal the contribution of such components for the full model.",
"First, for the document structures, we examine the following ablated models:",
"(i) StructECR x : where x is one of the five interaction scores used to compute the unified score a ij for G (i.e., a sentij , a corefij , a spanij , a depij , and a structij ).",
"For example, StructECR a spanij implies a variant of StructECR where the span-based interaction score a spanij is not included in the compuation of the overall score a ij ;",
"(ii) StructECR Entity Nodes : this model excludes the entity mention nodes from the interaction graph G in StructECR (i.e., N = D E only);",
"(iii) StructECR GraphCombine : instead of unifying the five interaction scores in d ij into an overall score a ij in Equation 1, this model considers each of the five generated interaction scores as forming a separate interaction graph, thus producing six different graphs.",
"The GCN model is then applied over those five graphs (using the same initial representation vectors v i for the nodes n i in N ).",
"The outputs of the GCN model for the same node n i (with different graphs) are then concatenated to compute the final representation vector h i for n i ; and",
"(iv) StructECR Doc Structures : this model removes the GCN model from StructECR.",
"As such, the interaction graph G is not used and the GCN-induced representation vectors h i are replaced by the initial BERT-induced representation vectors v i in the computation for end-to-end resolution and consistency regularization.",
"Second, for the cluster consistency regularization, we evaluate the following ablated models for StructECR:",
"(v) StructECR y ( y Model B 3 CEAF e MUC BLANC AVGF 1 StructECR (full) 76.86 69.99 66.40 69.02 70.57 StructECR a sentij 75.37 69.73 62.42 69.49 69.25 StructECR a corefij 75.07 69.74 62.97 69.67 69.36 StructECR a spanij 75.32 70.32 63.44 66.97 69.01 StructECR a depij 74.66 69.76 62.72 69.14 69.07 StructECR a structij 75.44 69.53 61.82 71.48 69.57 StructECR Entity Nodes 74.67 69.71 63.01 67.35 68.69 StructECR GraphCombine 75.41 69.74 62.38 68.90 69.11 StructECR Doc Structures 74.15 66.78 60.24 66.32 66.87 StructECR L inner 75.09 68.44 62.25 68.01 68.45 StructECR L inter 74.80 67.98 61.92 67.71 68.10 StructECR L sim 75.13 68.12 62.03 68.95 68.56 StructECR Regularization 74.46 67.55 60.74 68.28 67.76 Table 2: Performance on the KBP 2015 dev set.",
"(vi) StructECR Regularization : this model completely ignores the consistency regularization component from StructECR.",
"Table 2 shows the performance of the models on the development data of the KBP 2015 dataset.",
"As can be seen, the elimination of any component from StructECR would significantly hurt the performance, thus clearly demonstrating the benefits of the designed document structures and cluster consistency regularization in StructECR.",
"To further demonstrate the benefits for the proposed model StructECR, we evaluate StructECR and the baseline models Pairwise and E2E-Only in the cross-domain setting.",
"In this setting, we aim to train the models on one domain (the source domain) and evaluate them on another domain (the target domain).",
"We leverage the KBP 2016 and KBP 2017 datasets for this experiment.",
"In particular, KBP 2016 annotates ECR data for 85 newswire and 84 discussion forum documents (i.e., two do-mains/genres) while KBP 2017 provides annotated data for ECR on 83 news articles and 84 discussion forum documents.",
"As such, for each dataset, we consider two setups where documents in one domain (i.e., newswire or discussion forum) are used for the source domain, leaving documents in the other domain for the target domain data.",
"We use the same hyper-parameters that are tuned on the development set of KBP 2015 for the models in this experiment.",
"Table 3 presents the performance of the models.",
"It is clear from the table that StructECR are significantly and substantially better than the baseline models ( p < 0 . 01 ) over different datasets and settings for the source and target domains, thereby confirming the domain generalization advantages of StructECR for ECR.",
"We present a novel end-to-end coreference resolution framework for event mentions based on deep learning.",
"The novelty in our model is twofold.",
"First, document structures are introduced to explicitly capture relevant objects and their interactions in documents to aid representation learning.",
"Second, several regularization techniques are proposed to exploit the consistencies between human-provided and machine-generated clusters of event mentions in documents.",
"We perform extensive experiments on two benchmark datasets for ECR to demonstrate the advantages of the proposed model.",
"In the future, we plan to extend our models to related problems in information extraction, e.g., event extraction.",
"This research has been supported by the Army Research Office (ARO) grant W911NF-21-1-0112.",
"This research is also based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the Better Extraction from Text Towards Enhanced Retrieval (BETTER) Program.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, ODNI, IARPA, the Department of Defense, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations."
] | [
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"objective",
"result",
"objective",
"method",
"abstain",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"Existing leading code comment generation approaches with the structure-to-sequence framework ignores the type information of the interpretation of the code, e.g., operator, string, etc.",
"However, introducing the type information into the existing framework is non-trivial due to the hierarchical dependence among the type information.",
"In order to address the issues above, we propose a Type Auxiliary Guiding encoder-decoder framework for the code comment generation task which considers the source code as an N-ary tree with type information associated with each node.",
"Specifi-cally, our framework is featured with a Type-associated Encoder and a Type-restricted Decoder which enables adaptive summarization of the source code.",
"We further propose a hierarchical reinforcement learning method to resolve the training difficulties of our proposed framework.",
"Extensive evaluations demonstrate the state-of-the-art performance of our framework with both the auto-evaluated metrics and case studies.",
"The comment for the programming code is critical for software development, which is crucial to the further maintenance of the project codebase with significant improvement of the readability (Aggar-wal et al., 2002; Tenny, 1988).",
"Code comment generation aims to automatically transform program code into natural language with the help of deep learning technologies to boost the efficiency of the code development.",
"Existing leading approaches address the code comment generation task under the structure-to-sequence (Struct2Seq) framework with an encoder-decoder manner by taking advantage of the inherent structural properties of the code.",
"source code have shown significant improvement to the quality of the generated comments (Liang and Zhu, 2018; Alon et al., 2018; Hu et al., 2018; Wan et al., 2018); Solutions representing source code as graphs have also shown high-quality comment generation abilities by taking advantage of extracting the structural information of the codes (Xu et al., 2018a,b; Fernandes et al., 2018).",
"Although promising results were reported, we observe that the information of the node type in the code is not considered in these aforementioned Struct2Seq based solutions.",
"The lack of such essential information lead to the following common limitations: 1) Losing the accuracy for encoding the source code with the same structure but has different types.",
"As shown in Fig.",
"1(a), a Tree-LSTM (Tai et al., 2015) encoder is illustrated to extract the structural information, the two subtrees of the code Select' and Compare' in the dashed box have the same structure but different types, with the ignorance of the type information, the traditional encoders illustrate the same set of neural network parameters to encode the tree, which leads to an inaccurate generation of the comment.",
"2) Losing both the efficiency and accuracy for searching the large vocabulary in the decoding procedure, especially for the out-of-vocabulary (OOV) words that exist in the source code but not in the target dictionary.",
"As shown in the Fig.",
"1(a), missing the type of ACL' node usually results in an unknown word UNK' in the generated comments.",
"Thus, the key to tackle these limitations is efficiently utilizing the node type information in the encoder-decoder framework.",
"To well utilize the type information, we propose a Type Auxiliary Guiding (TAG) encoder-decoder framework.",
"As shown in Fig.",
"1(b), in the encoding phase, we devise a Type-associated encoder to encode the type information in the encoding of the N-ary tree.",
"In the decoding phase, we facilitate the generation of the comments with the help of type information in a two-stage process naming operation selection and word selection to reduce the searching space for the comment output and avoid the out-of-vocabulary situation.",
"Considering that there is no ground-truth labels for the operation selection results in the two-stage generation process, we further devised a Hierarchical Reinforcement Learning (HRL) method to resolve the training of our framework.",
"Our proposed framework makes the following contributions: An adaptive Type-associated encoder which can summarize the information according to the node type; A Type-restricted decoder with a two-stage process to reduce the search space for the code comment generation; A hierarchical reinforcement learning approach that jointly optimizes the operation selection and word selection stages.",
"Code comment generation frameworks generate natural language from source code snippets, e.g. SQL, lambda-calculus expression and other programming languages.",
"As a specified natural language generation task, the mainstream approaches could be categorized into textual based method and structure-based method.",
"The textual-based method is the most straightforward solution which only considers the sequential text information of the source code.",
"For instance, Movshovitz-Attias and Cohen (2013) uses topic models and n-grams to predict comments with given source code snippets; Iyer et al. (2016) presents a language model Code-NN using LSTM networks with attention to generate descriptions about C# and SQL; Allamanis et al. (2016) predicts summarization of code snippets using a convolutional attention network; Wong and Mooney (2007) presents a learning system to generate sentences from lambda-calculus expressions by inverting semantic parser into statistical machine translation methods.",
"The structure-based methods take the structure information into consideration and outperform the textual-based methods.",
"Alon et al. (2018) processes a code snippet into the set of compositional paths in its AST and uses attention mechanism to select the relevant paths during the decoding.",
"Hu et al. (2018) presents a Neural Machine Translation based model which takes AST node sequences as input and captures the structure and semantic of Java codes.",
"Wan et al. (2018) combines the syntactic level representation with lexical level representation by adopting a tree-to-sequence (Eriguchi et al., 2016) based model.",
"Xu et al. (2018b) considers a SQL query as a directed graph and adopts a graph-to-sequence model to encode the global structure information.",
"Copying mechanism is utilized to address the OOV issues in the natural language generation tasks by reusing parts of the inputs instead of selecting words from the target vocabulary.",
"See et al. (2017) presents a hybrid pointer-generator network by introducing pointer network (Vinyals et al., 2015) into a standard sequence-to-sequence (Seq2Seq) model for abstractive text summarization.",
"COPYNET from Gu et al. (2016) incorporates the conventional copying mechanism into Seq2Seq model and selectively copy input segments to the output sequence.",
"In addition, Ling et al. (2016) uses the copying mechanism to copy strings from the code.",
"Our targeted task is considered as the opposite process of natural language to programming code (NL-to-code) task.",
"So some of the NL-to-code solutions are also taken as our references.",
"Dong and Lapata (2016) distinguishes types of nodes in the logical form by whether nodes have child nodes.",
"Yin and Neubig (2017); Rabinovich et al. (2017); Xu et al. (2018a) take the types of AST nodes into account and generate the corresponding programming codes.",
"Cai et al. (2018) borrows the idea of Automata theory and considers the specific types of SQL grammar in Backus-Naur form (BNF) and generates accurate SQL queries with the help of it.",
"Inspired by the methods considering the type s LSTM LSTM LSTM gen copy Operation Selection Stage what of SELECT ACL Word Distribution(Generation) WordSelectionStage Word Distribution(Copying) Y N = ?",
"information of the code, our solution differs from the existing method with a Type-associated Encoder that encodes the type information during the substructure summarization and a Type-restricted Decoder that can reduce search space for the code comment generation.",
"In addition, two improvements are developed according to our objectives.",
"First, we design a type-restricted copying mechanism to reduce the difficulty of extracting complex grammar structure from the source code.",
"Second, we use a hierarchical reinforcement learning methods to train the model in our framework to learn to select from either copy or other actions, the details will be presented in Section 3. 3 Model Overview We first make the necessary definition and formulation for the input data and the code comment generation problem for our Type Auxiliary Guiding (TAG) encoder-decoder framework.",
"Definition 1 Token-type-tree.",
"Token-type-tree T x, represents the source code with the node set V , which is a rooted N-ary tree.",
"And V = { v 1 , v 2 ,",
".., v | V | } denotes a partial order nodes set satisfying v 1 (cid:22) v 2 (cid:22) ..., (cid:22) v | V | .",
"Let internal node v j = { x j , j } , where x j denotes the token sequence and j denotes a type from grammar type set T .",
"Token-type-tree can be easily constructed from token information of the original source code and type information of its AST or parse tree.",
"According to Definition 1, we formulate the code comment generation task as follows.",
"Formulation 1 Code Comment Generation with Token-type-tree as the Input.",
"Let S denote training dataset and labeled sample ( T x, , y ) S , where T x, is the input token-type-tree, y = ( y 1 , y 2 , , y M ) is the ground truth comment with M words.",
"The task of code comment generation is to design a model which takes the unlabeled sample T x, as input and predicts the output as its comment, denoted as y .",
"Our framework follows the encoder-decoder manner, and consists of the revised two major components, namely the Type-associated Encoder and Type-restricted Decoder .",
"As shown in Fig. 2.",
"The Type-associated Encoder , as shown in Fig. 2, recursively takes the token-type-tree T x, as input, and maintains the semantic information of the source code in the hidden states.",
"Instead of using the same parameter sets to learn the whole token-type-tree, Type-associated Encoder utilizes multiple sets of parameters to learn the different type of nodes.",
"The parameters of the cells are adaptively invoked according to the type of the current node during the processing of the input token-type-tree.",
"Such a procedure enables the structured semantic representation to contain the type information of the source code.",
"The Type-restricted Decoder , as shown in the right part of Figure 2, takes the original toke-type-tree T x, and its semantic representation from encoder as input and generates the corresponding comment.",
"Different from conventional decoders which generate output only based on the target dictionary, our Type-restricted Decoder considers both input code to the encoder and target dictionary as the source of output.",
"Attention mechanism is employed to compute an attention vector which is used to generate the output words through a two-stage process: (1) Determine either to copy from the original token-type-tree or to generate from the current hidden state according to the distribution of the operation.",
"(2) If the copying operation is selected, the words are copied from the selected node from the token-type-tree T x, with restricted types; otherwise, the candidate word will be selected from the target dictionary.",
"The above two-stage process is guided by the type which is extracted from the hidden state of encoder with the help of attention mechanism.",
"Such a process enables adaptive switching between copying and generation processes, and not only reduces the search space of the generation process but also addresses the OOV problem with the copying mechanism.",
"Although the proposed framework provides an efficient solution with the utilization of the type information in the code, training obstacles are raised accordingly: (1) No training labels are provided for the operation selection stage.",
"(2) There is a mismatch between the evaluation metric and the objective function.",
"Thus, we further devised an HRL method to train our TAG model.",
"In the HRL training, the TAG model feeds back the evaluation metric as the learning reward to train the two-stage sampling process without relying on the ground-truth label of operation selection stage.",
"The encoder network aims to learn a semantic representation of the input source code.",
"The key challenge is to provide distinct summarization for the sub-trees with the same structure but different semantics.",
"As shown in the Type-associated Encoder in Fig. 1, the blue and red dashed blocks have the same 3-ary substructure.",
"The sub-tree in the blue box shares the same sub-structure with the tree in the red box, which is usually falsely processed by the same cell in a vanilla Tree-LSTM.",
"By introducing the type information, the semantics of the two subtrees are distinguished from each other.",
"Our proposed Type-associated Encoder is designed as a variant N -ary Tree-LSTM.",
"Instead of directly inputting type information as features into the encoder for learning, we integrate the type information as the index of the learning parameter sets of the encoder network.",
"More specifically, different sets of parameters are defined through different types, which provides a more detailed summarization of the input.",
"As is shown in Fig.",
"1(b), the two sub-trees in our proposed Type-associated Encoder are distinguished by the type information.",
"The tree contains N ordered child nodes, which are indexed from 1 to N .",
"For the j -th node, the hidden state and memory cell of its k -th child node is denoted as h jk and c jk , respectively.",
"In order to effectively capture the type information, we set W j and b j to be the weight and bias of the j -th node, and U jk be the weight of the k -th child of the j -th node.",
"The transition equation of the variant N -ary Tree-LSTM is shown as follow: i j = (cid:32) W ( i ) j ( x j ) + N (cid:88) l =1 U ( i ) jl h jl + b ( i ) j (cid:33) , (1) f jk = (cid:32) W ( f ) jk ( x j ) + N (cid:88) l =1 U ( f ) jl,k h jl + b ( f ) jk (cid:33) , (2) o j = (cid:32) W ( o ) j ( x j ) + N (cid:88) l =1 U ( o ) jl h jl + b ( o ) j (cid:33) , (3) u j = tanh (cid:32) W ( u ) j ( x j ) + N (cid:88) l =1 U ( u ) jl h jl + b ( u ) j (cid:33) , (4) c j = i j (cid:12) u j + N (cid:88) l =1 f jl (cid:12) c jl , (5) h j = o j (cid:12) tanh ( c j ) , (6) We employ the forget gate (Tai et al., 2015) for the Tree-LSTM, the parameters for the k -th child of the j -th node's is denoted as f jk .",
"U jl,k is used to represent the weight of the type for the l -th child of the j -th node in the k -th forget gate.",
"The major difference between our variants and the traditional Tree-LSTM is that the parameter set ( W , U , b ) are specified for each type .",
"Following with the Type-associated Encoder, we propose a Type-restricted Decoder for the decoding phase, which incorporates the type information into its two-stage generation process.",
"First of all, an attention mechanism is adopted in the decoding phase which takes hidden states from the encoder as input and generates the attention vector.",
"The resulted attention vector is used as input to the following two-stage process, named operation selection stage and word selection stage , respectively.",
"The operation selection stage selects between generation operation and copying operation for the following word selection stage.",
"If the generation operation is selected, the predicted word will be generated from the targeted dictionary.",
"If the copying operation is selected, then a type-restricted copying mechanism is enabled to restrict the search space by masking down the illegal grammar types.",
"Furthermore, a copying decay strategy is illustrated to solve the issue of repetitively focusing on specific nodes caused by the attention mechanism.",
"The details of each part are given below.",
"Attention Mechanism: The encoder extracts the semantic representation as the hidden state of the rooted nodes, denoted as h r , which are used to initialize the hidden state of the decoder, z 0 h r .",
"At time step m , given output y m 1 and the hidden state of the decoder z m 1 at last time step m 1 , the hidden state z m is recursively calculated by the LSTM cells in the decoder, z m = LST M ( z m 1 , y m 1 ) .",
"(7) The attention vector q is calculate with: mj = exp (cid:16) h (cid:62) j z m (cid:17) (cid:80) | V x | j =1 exp (cid:16) h (cid:62) j z m (cid:17) , (cid:102) q m = | V x | (cid:88) j =1 mj h j , q m = tanh ( W q [ (cid:101) q , z m ]) , (8) where W q is the parameters of the attention mechanism.",
"The attention vector contains the token and type information, which is further facilitated in the following operation selection and word selection stages.",
"Operation Selection Stage: Operation Selection Stage determines either using the copying operation or the generation operation to select the words based on the attention vector and hidden states from the encoder.",
"Specifically, given the attention vector q m at time step m , Operation Selection Stage estimates the conditional probabilities as the distribution of the operation p ( a m | y <m ; T x, ) , where a m { 0 , 1 } and 0 and 1 represents the copy and the generation operations, respectively.",
"A fully connected layer followed by a softmax is implemented to compute the distribution of the operations.",
"The W s in the Eq.",
"9 is the trainable parameters.",
"Since there is no ground-truth label for operation selection, we employ an HRL method to jointly train the operation selection stage and the following stage, the details are provided in Section 6.",
"Word Selection Stage: Word Selection Stage also contains two branches.",
"The selection between them is determined by the previous stage.",
"If the generation operation is selected in the Operation Selectoin Stage, the attention vector will be fed into a softmax layer to predict the distribution of the target word, formulated as p ( y m | a m = 1 , y <m ; T x, ) = softmax ( W g q m ) , (10) where W g is the trainable parameters of the output layer.",
"Otherwise, if the copy operation is selected, we employ the dot-product score function to calculate score vector s m of the hidden state of the node and the attention vector.",
"Similarly, score vector s m will be fed into a softmax layer to predict the distribution of the input word, noted as: s m = (cid:2) h 1 , h 2 , , h | V x | (cid:3) (cid:62) q m p ( y m | a m = 0; y <m ; T x, ) = softmax ( s m ) .",
"One step further, to filter out the illegally copied candidates, we involve a grammar-type based mask vector d m R | V x | at each decoding step m .",
"Each dimension of d m corresponds to each node of the token-type-tree.",
"If the mask of the node in token-type-tree indicates the node should be filtered out, then the corresponding dimension is set as negative infinite.",
"Otherwise, it is set to 0 .",
"Thus, the restricted copying stage is formulated as p ( y m | a m = 0 , y <m ; T x, ) = softmax ( s m + d m ) .",
"(12)",
"The word distribution of the two branches is represented with a softmax over input words or target dictionary words in Eq.",
"10 and Eq.",
"12.",
"At each time step, the word with the highest probability in the word distribution will be selected.",
"Copying Decay Strategy: Similar to the conventional copying mechanism, we also use the attention vector as a pointer to guide the copying process.",
"The type-restricted copying mechanism tends to pay more attention to specific nodes, resulting in the ignorance of other available nodes, which makes certain copied tokens repeatedly active in a short distance in a single generated text, lead to a great redundancy of the content.",
"outstandingly copied nodes.",
"We define a copy time-based decay rate mi for the i -th tree node x i in the m -th decoding step.",
"If one node is copied in time step m , its decay rate is initialized as 1 .",
"In the next time step m + 1 , it is scaled by a coefficient (0 , 1) : m +1 ,i = m,i (13) The overall formulation for the Type-restricted Decoder is: p ( y m | a m = 0 , y <m ; T x, ) = softmax ( s m + d m ) (cid:12) (1 m ) (14) 6 Hierarchical Reinforcement Learning There remain two challenges to train our proposed framework, which are 1) the lack of ground truth label for the operation selection stage and 2) the mismatch between the evaluation metric and objective function.",
"Although it is possible to train our framework by using the maximum likelihood estimation (MLE) method which constructs pseudo-labels or marginalize all the operations in the operation selection stage (Jia and Liang, 2016; Gu et al., 2016), the loss-evaluation mismatch between MLE loss for training and non-differentiable evaluation metrics for testing lead to inconsistent results (Keneshloo et al., 2019; Ranzato et al., 2015).",
"To address these issues, we propose a Hierarchical Reinforcement Learning method to train the operation selection stage and word selection stage jointly.",
"We set the objective of the HRL as maximizing the expectation of the reward R ( y , y ) between the predicted sequence y and the ground-truth sequence y , denoted as L r .",
"It could be formulated as a function of the input tuple { T x, , y } as, L r = 1 |S| (cid:88) ( T x, , y ) S E y p ( y | T x, ) [ R ( y , y )] = 1 |S| (cid:88) ( T x, , y ) S (cid:88) y Y p ( y | T x, ) R ( y , y ) , (15) Here, Y is the set of the candidate comment sequences.",
"The reward R (( y ) , y ) is the nondifferentiable evaluation metric, i.e., BLEU and ROUGE (details are in Section 7).",
"The expectation in Eq.",
"(15) is approximated via sampling y from the distribution p ( y | T x, ) .",
"The procedure of sampling y from p ( y | T x, ) is composed of the sub-procedures of sampling y m from p ( y m | y <m ; T x, ) in each decoding step m .",
"As mentioned above, the predicted sequence y comes from the two branches of Word Selection Stage, depending on the Operation Selection Stage.",
"a is defined as the action of the Operation selection stage.",
"After involving the action a m in time step m , Eq.",
"(15) can be constructed by the joint distribution of the two stages: 1 |S| (cid:88) ( T x, , y ) S (cid:88) y Y p ( y | T x, ) R ( y , y ) = 1 |S| (cid:88) ... (cid:88) y Y ( M (cid:89) m =1 (cid:88) a m p ( y m , a m | y <m ; T x, ) (cid:124) (cid:123)(cid:122) (cid:125) Two-stageJointDistribution ) R ( y , y ) = ... p ( y m | a m ; y <m ; T x, ) (cid:124) (cid:123)(cid:122) (cid:125) WordDistribution p ( a m | y <m ; T x, ) (cid:124) (cid:123)(cid:122) (cid:125) OperationDistribution ... (16) As shown in Eq.",
"(16), the model finally selects the word y m in time step m from the word distribution conditioned on y <m , T x, and the operation a m which is determined in the operation selection stage.",
"In other words, there is a hierarchical dependency between the word selection stage and the operation selection stage.",
"As mentioned above, Y represents the space for all candidate comments, which is too large to practically maximize L r .",
"Since decoding is constructed via sampling from p ( y m | a m , y <m ; T x, ) and p ( a m | y <m ; T x, ) , We adopt the Gumbel-Max solution (Gumbel, 1954) for the following sampling procedure: a m p ( a m | y <m ; T x, ) , y m p ( y m | a m , y <m ; T x, ) .",
"Through the maximum sampling step M, Eq.",
"(16) could be further approximated as the following equation: L r = 1 |S| (cid:88) y S R ( y , y ) (18) The objective in Eq.",
"(18) remains another challenge: for the entire sequence y , there is only a final reward R ( y , y ) available for model training, which is a sparse reward and leads to inefficient training of the model.",
"So we introduce reward shaping (Ng et al., 1999) strategy to provide intermediate rewards to proceed towards the training goal, which adopts the accumulation of the intermediate rewards to update the model.",
"To further stabilize the HRL training process, we combine our HRL objective with the maximum-likelihood estimation(MLE) function according to Wu et al. (2018a, 2016); Li et al. (2017); Wu et al. (2018b): L e = 1 |S| (cid:88) ( T x, , y ) S (cid:88) y Y log p ( y | T x, ) L = L e + (1 ) L r , (19) where is a variational controlling factor that controls the trade-off between maximum-likelihood estimation function and our HRL objective.",
"In the current training step tr , varies according to the training step tt as follows: = 1 tr tt (20) 7 Evaluation and Analysis 7.1 Experimental Setup 7.1.1 Datasets We evaluate our TAG framework on three widely used benchmark data sets, which are WikiSQL (Zhong et al., 2017), ATIS (Dong and Lapata, 2016) and CoNaLa (Yin et al., 2018).",
"WikiSQL is a dataset of 80654 hand-annotated examples of SQL query and natural language comment pairs distributed across 24241 tables from Wikipedia.",
"These SQL queries are further split into training (56355 examples), development (8421 examples) and test (15878 examples) sets.",
"ATIS is in the form of lambda-calculus, which is a set of 5410 inquiries for flight information containing 4434 training examples, 491 development examples and 448 test examples.",
"CoNaLa is a python related dataset.",
"Its original version is used which includes 2879 snip-pet/intent pairs crawled from Stack Overflow, split into 2379 training and 500 test examples.",
"We extract 200 random examples from its training set as the development set.",
"We transfer the SQL queries of WikiSQL into ASTs with 6 types according to the Abstract Syntax Description Language (ASDL) grammar, where the ASDL grammar for SQL queries is proposed in Yin and Neubig (2017).",
"We transfer the lambda-calculus logical forms of ATIS to tree structure with 7 types according to the method proposed in Dong and Lapata (2016).",
"The python snippets of CoNaLa are transformed into ASTs with 20 types, following the official ASDL grammar of python 1 .",
"The data of the ASTs of these datasets is shown in Table 1, where the maximum depth of ASTs (Max-Tree-Depth), the maximum number of child 1 https://docs.python.org/3.5/library/ast.html nodes in ASTs (Max-Child-Count) and the average number of tree nodes in ASTs (Avg-Tree-Node-Count) are shown.",
"We choose the representative designs for code comment generation as our baselines for comparison.",
"Code-NN (Iyer et al., 2016) is chosen because of it is the first model to transform the source code into sentences.",
"Pointer Generator (See et al., 2017) (P-G) is a seq2seq based model with a standard copying mechanism.",
"In addition, we choose the attention based Tree-to-Sequence (Tree2Seq) model proposed by Eriguchi et al. (2016).",
"Moreover, we also add the copying mechanism into Tree2Seq model as another baseline (T2S+CP).",
"We choose Graph-to-Sequence (Graph2Seq) (Xu et al., 2018b) as a graph-based baseline for comparison.",
"Since the authors have not released the code for data-preprocessing, we convert the tree-structured representation for the source code of SQL data into directed graphs for our replication.",
"Code-NN uses embedding size and hidden size both as 400, and applies random uniform initializer with 0.35 initialized weight, and adopts stochastic gradient descent algorithm to train the model with a learning rate at 0.5.",
"P-G uses 128 embedding size, 256 hidden size and applies random uniform initializer with 0.02 initialized weights for initialization and Adam optimizer to train the model with 0.001 learning rate.",
"Graph2Seq uses 100 embedding size, 200 hidden size and applies the truncated normal initializer for initialization.",
"Adam optimizer is used to train the model with a 0.001 learning rate.",
"We use the Xavier initializer (Glorot and Bengio, 2010) to initialize the parameters of our proposed TAG framework.",
"The size of embeddings is equivalent to the dimensions of LSTM states and hidden layers, which is 64 for ATIS and CoNaLa and 128 for WikiSQL.",
"TAG is trained using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001.",
"In order to reduce the size of the vocabulary, low-frequency words are not kept in both the Model WikiSQL (SQL) ATIS (lambda-calculus) CoNaLa (Python) BLEU-4 ROUGE-2 ROUGE-L BLEU-4 ROUGE-2 ROUGE-L BLEU-4 ROUGE-2 ROUGE-L Code-NN 6.7 9.7 30.9 37.1 43.28 59.4 8.1 12.2 26.1 P-G 25.7 29.2 50.1 41.9 47.3 60.5 10.0 13.8 28.0 Tree2Seq 22.0 22.0 43.4 40.1 47.2 60.9 6.6 9.2 25.2 Graph2Seq 17.6 24.3 45.7 34.6 41.8 58.3 10.4 14.1 28.2 T2S+CP 31.0 36.8 54.5 39.0 43.7 58.4 13.3 18.5 31.5 TAG(B) 35.8 41.0 57.8 42.4 47.4 61.2 14.1 19.4 31.8 TAG(R) 35.2 41.1 58.1 40.6 47.1 61.5 12.6 19.7 32.2 Table 2: Comparisons with baseline models on different test sets.",
"vocabulary for the source codes and the vocabulary for target comments.",
"Specifically, the minimum threshold frequency for WikiSQL and ATIS is set as 4 while for CoNaLa it is set as 2.",
"The hyperparameters of Tree2Seq and T2S+CP is equivalent to ours.",
"The minibatch size of all the baseline models and ours are set to 32.",
"We illustrate the n-gram based BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) evaluations to evaluate the quality of our generated comments and also use them to set the reward in the HRL based training.",
"Specifically, BLEU-4, ROUGE-2 and ROUGE-L are used to evaluate the performance of our model since they are the most representative evaluation metric for context-based text generation.",
"Table 2 presents the evaluation results of the baseline frameworks and our proposed ones.",
"Since our HRL could be switched to different reward functions, we evaluate both the BLEU oriented and ROUGE oriented training of our framework, denoted as TAG(B) and TAG(R).",
"The results of TAG(B) and TAG(R) varies slightly compared to each other.",
"However, both of them are significantly higher than all the selected counterparts, which demonstrates the state-of-the-art generation quality of our framework on all the datasets with different programming languages.",
"Specifically, TAG improves over 15% of BLEU-4, over 10% of ROUGE-2 and 6% of ROUGE-L on WikiSQL when compared to T2S+CP, which is the best one among all the baseline target for all the evaluations.",
"For the lambda-calculus related corpus, TAG improves 1.0% of BLEU, 0.2% ROUGE-2 and 0.5% ROUGE-L on ATIS.",
"The performance is more difficult to be improved on ATIS Model BLEU-4 ROUGE-2 ROUGE-L TAG-TA 34.8(-1.4) 41.0(-1.3) 57.8(-1.6) TAG-MV 35.2(-1.0) 41.1(-1.2) 58.1(-1.3) TAG-CD 33.5(-2.7) 40.0(-2.3) 57.1(-2.3) TAG-RL 34.6(-1.6) 41.4(-0.9) 58.7(-0.7) TAG(B) 36.2 42.0 58.8 TAG(R) 35.6 42.3 59.4 Table 3: Ablation study of TAG framework.",
"than the other two corpora due to the great dissimilarity of sub-trees of the lambda-calculus logical forms in it.",
"In terms of the python related corpus, TAG improves 6% of BLEU, 6.4% of ROUGE-2 and 2.2% of ROUGE-L on CoNaLa when compared to the best one in our baselines.",
"The low evaluation score and improvement of CoNaLa are due to the complex grammatical structures and lack of sufficient training samples, i.e., 20 types across only 2174 training samples, which result in an inadequately use of the advantage of our approach.",
"However, our TAG framework still outperforms all the counterparts on these two datasets.",
"To investigate the performance of each component in our model, we conduct ablation studies on the development sets.",
"Since all the trends are the same, we omit the results on the other data sets and only present the ones of WikiSQL.",
"The variants of our model are as follows: TAG-TA: remove Type-associated Encoder , use Tree-LSTM instead.",
"TAG-MV: remove the mask vector d m .",
"TAG-CD: remove Copying Decay Strategy.",
"TAG-RL replace HRL with MLE, marginalize the actions of the operation selection.",
"The results of the ablation study are given in Table 3. Overall, all the components are necessary to TAG framework and providing important contributions to the final output.",
"When compared to TAG-TA, the high performance of standard TAG Code Comment SQL: SELECT MAX(Capacity) FROM table WHERE Stadium = Otkrytie Arena Ground-Truth : What is the maximum capacity of the Otkrytie Arena Stadium ?",
"benefits from the Type-associated Encoder which adaptively processes the nodes with different types and extracts a better summarization of the source code.",
"The downgraded performance of TAG-MV and TAG-CD indicates the advantages of the type-restricted masking vector and Copying Decay Strategy.",
"These together ensure the accurate execution of the copy and word selection.",
"The comparison of TAG and TAG-RL shows the necessity of the HRL for the training of our framework.",
"In order to show the effectiveness of our framework in a more obvious way, some cases generated by TAG are shown in Table 4. SQL and Python are taken as the targeted programming languages.",
"The comments generated by TAG show great improvements when compared to the baselines.",
"Specifi-cally, for the case in SQL, the keyword Otkrytie Area is missing in all the baselines but accurately generated by our framework.",
"For the case in Python, the comment generated by TAG is more readable than the others.",
"These cases demonstrate the high quality of the comments generated by our TAG framework.",
"In this paper, we present a Type Auxiliary Guiding encoder-decoder framework for the code comment generation task.",
"Our proposed framework takes full advantage of the type information associated with the code through the well designed Type-associated Encoder and Type-restricted Decoder .",
"In addition, a hierarchical reinforcement learning method is provided for the training of our framework.",
"The experimental results demonstrate significant improvements over state-of-the-art approaches and strong applicable potential in software development.",
"Our proposed framework also verifies the necessity of the type information in the code translation related tasks with a practical framework and good results.",
"As future work, we will extend our framework to more complex contexts by devising efficient learning algorithms.",
"This research was supported in part by Natural Science Foundation of China (61876043, 61976052), Natural Science Foundation of Guangdong (2014A030306004, 2014A030308008), Science and Technology Planning Project of Guangzhou (201902010058).",
"Besides, this project is also partly supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.",
"This research was also made possible by NPRP grant NPRP10-0208-170408 from the Qatar National Research Fund (a member of Qatar Foundation).",
"The findings herein reflect the work, and are solely the responsibility of the authors."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Abstract Warning: this paper contains examples that may be offensive or upsetting.",
"The social impact of natural language processing and its applications has received increasing attention.",
"In this position paper, we focus on the problem of safety for end-to-end conversational AI.",
"We survey the problem landscape therein, introducing a taxonomy of three observed phenomena: the INSTIGATOR , YEASAYER , and IMPOSTOR effects.",
"We then empirically assess the extent to which current tools can measure these effects and current systems display them.",
"We release these tools as part of a first aid kit",
"(SAFETYKIT )",
"to quickly assess apparent safety concerns.",
"Our results show that, while current tools are able to provide an estimate of the relative safety of systems in various settings, they still have several shortcomings.",
"We suggest several future directions and discuss ethical considerations.",
"Several recent studies discuss the potential harms and benefits of large language models",
"(LLMs), e.g., Bender et al.",
"(2021); Bommasani et al.",
"(2021); Weidinger et al.",
"(2021).",
"Here, we turn our attention to neural conversational response generation models that are trained end-to-end on open-domain dialog data",
"(E2E convAI).",
"Examples include DialoGPT",
"(Zhang et al., 2020b), Meena Bot",
"(Adiwar-dana et al., 2020), and BlenderBot",
"(Roller et al., 2021).",
"In contrast to general generative or autoregressive LLMs, these specialized models are typically deployed in an interactive setting, i.e., conversing with a user.",
"They are trained on large amounts of conversational data, for example Twitter, pushshift.io Reddit",
"(Baumgartner et al., 2020), or OpenSubtitles dataset.",
"Large neural models in general, and convAI models in particular, have been shown to replicate and even amplify negative, stereotypical, and derogatory associations in the data",
"(Shah et al., 2020; Bender et al., 2021).",
"In addition, neural LM generation is hard to control, although there are some first steps in this direction",
"(Khalifa et al., 2021; Smith et al., 2020b).",
"These two facts taken together can result in situations where convAI systems generate inappropriate content",
"(Dinan et al., 2019; Xu et al., 2020), or respond inappropriately to offensive content",
"(Cercas Curry and Rieser, 2018; Lee et al., 2019).",
"Furthermore, recent research suggests that the anthropomorphic design of these systems",
"(c.f. Abercrombie et al., 2021)",
"correlates with increased instances of bullying behavior",
"(Keijsers et al., 2021).",
"This change in interaction style and the attribution of agency",
"(Araujo, 2018)",
"results in safety scenarios that are qualitatively different from LLMs: here, an inappropriate response might result in severe, or even life-threatening, consequences for the user",
"(Bick-more et al., 2018).",
"We summarize these issues resulting in potential harm under the term safety.",
"In particular, we consider harmful system behavior that can lead to negative short-term impact, e.g., the user feeling insulted, and long-term harm, e.g., negative societal stereotypes being reinforced.",
"We consider three safety-sensitive phenomena for conversational systems, which we refer to as: the INSTIGATOR , YEASAYER , and IMPOSTOR effects",
"(see 2).",
"We provide an in-depth discussion of the potential impact of these three scenarios and define them in the context of related work.",
"We then empirically evaluate currently available tools for assessing the impact of E2E conversational AI models with respect to these phenomena.",
"We perform detailed experiments and analyses of the tools therein using five popular conversational AI agents, release them in a open-source toolkit",
"(SAFETYKIT ), and make recommendations for future use.",
"We introduce a taxonomy of three safety-sensitive situations for E2E convAI models, summarized",
"with examples in Table",
"1. We consider other issues related to the problem of safety for E2E convAI outside of the scope of this work; nevertheless, we briefly mention some of them in Appendix A. Note that this taxonomy has already inspired further work in this area (Sun et al., 2021).",
"In the first scenario, a system generates harmful content, thereby directly instigating harm.",
"One of the first and best-known examples is the Microsoft AI chatbot Tay, which was launched and subsequently shut down for producing offensive language (Miller et al., 2017).",
"What is offensive content?",
"Before diving into this phenomenon, we need to discuss the definition of offensive content, a well-studied subject in NLP.",
"Ultimately, whether or not something is offensive is subjective, and several authors emphasize that any decisions (e.g., on classification or mitigation strategies) should respect community norms and language practices (Jurgens et al., 2019; Sap et al., 2019; Kiritchenko and Nejadgholi, 2020).",
"Offensive content is therefore an umbrella term encompassing toxicity, hate speech, and abusive language (Fortuna et al., 2020).",
"Khatri et al. (2018) define sensitive content more generally as offensive to people based on gender, demographic factors, culture, or religion.",
"In addition to overtly offensive language, several works highlight the importance of including more subtle forms of abuse, such as implicit abuse and micro-aggressions (e.g., Jurgens et al., 2019; Caselli et al., 2020; Han and Tsvetkov, 2020).",
"Thylstrup and Waseem (2020) caution that using binary labels in itself incurs the risk of reproducing inequalities.",
"Detection of such problematic content online has attracted widespread attention in recent years, however, much of this focuses on human-produced content on social media platforms, such as Twitter (e.g. Waseem and Hovy, 2016; Wang et al., 2020; Zampieri et al., 2019, 2020), Facebook (Glava et al., 2020; Zampieri et al., 2020), or Reddit (Han and Tsvetkov, 2020; Zampieri et al., 2020).",
"Notably less work exists for conversational systems; generally focusing on user input, rather than system-generated responses, (e.g. Dinan et al., 2019; Xu et al., 2020; Cercas Curry et al., 2021).",
"Offensive system responses While less well-studied than human-generated offensive content, offensive content generated by the systems themselves i.e., the INSTIGATOREFFECT has been the subject of several recent works.",
"Ram et al. (2017), for example, use keyword matching and machine learning methods to detect system responses that are profane, sexual, racially inflammatory, other hate speech, or violent.",
"Zhang et al. (2020a) develop a hierarchical classification framework for malevolent responses in dialogues (al-though their data is from Twitter rather than human-agent conversations).",
"And Xu et al. (2020) apply the same classifier they used for detection of unsafe user input to system responses.",
"As in the case of Tay and more recently Luda (McCurry, 2021), 4114 conversational systems can also be vulnerable to adversarial prompts from users that elicit unsafe responses.",
"Liu et al. (2020) demonstrate this by generating prompts that manipulated an E2E model to generate outputs containing offensive terms.",
"Mitigation efforts A number of possible ways of mitigating offensive content generation in language models have been proposed.",
"One possibility is to not expose the system to offensive content in its training data, e.g., by creating data filters (Ngo et al., 2021).",
"However, in this scenario, models are still vulnerable to generating toxic content based on specific prompts (Gehman et al., 2020), even though the quantity of unprompted toxic content may decrease.",
"Similarly, Cercas Curry and Rieser (2018) find that conversational E2E models trained on clean data can [still] be interpreted as flirtatious and sometimes react with counter-aggression when exposed to abuse from the user.",
"Solaiman and Dennison (2021) find that, rather than filtering pre-training data, fine-tuning a language model on a small, curated dataset can be effective at limiting toxic generations.",
"An alternative approach is to control the language generation process.",
"Dathathri et al. (2019) use a simple classifier to guide a language model away from generation of toxic content.",
"Liu et al. (2021) detoxify a language model's output by upweighting the probabilities of generating words considered unlikely by a second anti-expert model that models toxic language.",
"Schick et al. (2021) propose something similar, but use instead the language model's own knowledge of toxic content to detect toxic generations in zero-shot manner.",
"For our focus, the dialog domain, Xu et al. (2020) compare several train-time approaches for mitigating offensive generation: detoxifying the model's training set as a pre-processing step, and distilling knowledge of how to respond to offensive user by augmenting the training set.",
"They also experiment with inference-time approaches, using both a two-stage set-up with a classifier in-the-loop and a token-blocking strategy (blocking n -grams from a blacklist from being generated at decoding time).",
"The two-stage setup returning a canned response when the classifier detects an offensive response from either the user or the model was overall most successful.",
"Another way to constrain the generation process is via grounding.",
"Sheng et al. (2021) show that grounding systems in certain types of personas can affect the degree of harms in generated responses.",
"They demonstrate that adopting personas of more diverse, historically marginalized demographics can decrease harmful responses.",
"Even when not directly instigating, a system may respond in a harmful manner by agreeing with (or otherwise replying unsatisfactorily to) user utterances that promote negative content: a yea-sayer who habitually agrees uncritically (Wiktionary).",
"One of the early examples is Weizenbaum (1983)'s famous chatbot ELIZA, which simply parroted back patterns of what users just said (Bassett, 2019).",
"Similarly, we are interested in the extent to which neural systems parrot offensive user content, e.g., by agreeing with hateful statements.",
"We note that in contrast to the INSTIGATOREFFECT , the YEASAYEREFFECT is unique to conversational systems, where meaning is actively constructed in context between two or more speakers (Austin, 1962; Grice, 1975): a system response may not be unsafe when considered on its own, but only when interpreted within the wider context of the conversation.",
"Agreement with social biases Lee et al. (2019) qualitatively analyze how two publicly available chatbots respond to sexist or racist utterances, finding the systems agree with known social biases.",
"Baheti et al. (2021) extend this approach by adding a stance (agree, disagree, neutral) towards a previous utterance.",
"However, stance seems difficult for humans to annotate (Krippendorf's = 0 . 18 ) and for machines to learn (F1 scores below 0 . 5 for agree vs. disagree).",
"Responding to abuse A related issue is systems' inappropriate response to abuse from the user.",
"For example, West et al. (2019) point out that tol-erant, unassertive and subservient responses by female-gendered systems to user abuse can reinforce negative gender stereotypes.",
"Mitigation efforts Because the YEA-SAYEREFFECT is contextual, it is important that our mitigation efforts make use of contextual conversational information.",
"Dinan et al. (2019) make a first attempt at this by building a dataset for offensive utterance detection within a multi-turn dialog context, but limited to human-human dialogs.",
"Xu et al. (2020) extend this to human-bot dialogs, with adversarial humans in-the-loop.",
"Cercas Curry et al. (2018) try different strategies to deal with abuse directed at their social chatbot, such as non-sequiturs, appeals to authority, and chastisement.",
"And in a follow-up study, Cer-4115 cas Curry and Rieser (2019) assess human over-hearers' evaluations of these strategies, finding varying preferences among different demographic groups.",
"In extending this previous work, Paran-jape et al. (2020) measure real users' re-offense rates following different response strategies, finding avoidance to be the most successful approach by this metric.",
"Li et al. (2021) repeat a similar experiment but find that empathetic responses perform better than generic avoidance responses.",
"Xu et al. (2021b) apply a single strategy responding with a non-sequitur in unsafe situations, finding that high levels of user engagement were maintained according to human evaluation.",
"The last effect consists of two related scenarios in which a system may give the user false impressions of its nature or capabilities.",
"In the first scenario, there is a lack of transparency concerning the agent's non-human, automatic status (Ruane et al., 2019; European Commission).",
"Gros et al. (2021) create a dataset of questions used to elicit the nonhuman status of conversational agents and analysed the responses of research and commercial systems.",
"While they test responses to direct queries such as are you a robot? , there do not yet exist tests for the types of subtle hints at anthropomorphism identified by Abercrombie et al. (2021).",
"In the second scenario, users receive inappropriate expert advice in safety-sensitive situations, e.g., medical advice.",
"Mielke et al. (2020) demonstrate that state-of-the-art neural generative chitchat models frequently respond confidently to questions with incorrect answers.",
"Under certain circumstances, inappropriate advice could inflict serious short or even long-term harm.",
"Like the YEASAYEREFFECT , the IMPOSTOREFFECT is unique to conversational systems.",
"We identify requests for medical advice, emergency situations, and expressions of intent to self-harm as safety-sensitive, though other scenarios could also apply.",
"As highlighted by Weidinger et al. (2021), the first issue reinforces the latter.",
"For example, Kim and Sundar (2012) show that users interacting with more human-like chatbots tend to attribute higher credibility to information shared by such human-like' chatbots.",
"In Appendix A, we survey specific areas where such harm may incur.",
"proliferation of chatbots for these domains.",
"In one recent example, however, Xu et al. (2020) identify medical advice as one of several sensitive top-ics to avoid.",
"They train a classifier on pushshift.io Reddit data (Baumgartner et al., 2020) that includes medical forums.",
"When users seek medical advice, their system issues a stock response.",
"Similar efforts could be applied to other domains.",
"In the following, we investigate to what extent existing tools are suitable to support researchers in making more informed decisions about building and releasing their models.",
"We assemble these tools in a SAFETYKIT , an open-source toolkit/repository to be extended as more (suitable) tools become available.",
"Similar to a first aid kit, SAFETYKIT is meant to detect apparent/ pronounced safety concerns, however, we recommend a more thorough examination through, for example, a stakeholder-focused study in order to fully assess potential harms.",
"In order to discourage hill-climbing on a benchmark and the negative effects which can stem from it (Raji et al., 2021), we do not provide an aggregate score, but discuss possible uses of the tools under investigation and their advantage and disadvantages.",
"Like a first aid kit, SAFETYKIT is quick and easily to access: it can be run with only simple API access to a model, i.e., a black box evaluation which only requires text output from the model, and consists of both unit tests (3.1), i.e., automatic metrics, and integration tests (3.2), i.e., human evaluations.",
"The limitations of SAFETYKIT are discussed in depth in 5.",
"All tools are open-sourced at https://parl.ai/projects/safety_bench/ .",
"The unit tests provide quick, automatic testing of how models perform in a number of safety situations, following the three categories laid out in 1.",
"An example of the terminal output of the unit tests is shown in Figure 1 in B.8.",
"ConvAI Agents We analyze the performance of five agents.",
"We consider both the 90M and 2.7B parameter variants of BlenderBot (Roller et al., 2021), as well as DialoGPT (Zhang et al., 2020b) and GPT-2 (Radford et al., 2019).",
"Model sampling details are provided in B.2.",
"While the focus of this paper is on E2E neural models, for purposes of comparison we also report results from Kuki 1 , an open-1 https://www.kuki.ai/research 4116 domain chatbot from ICONIQ (a Pandorabots subsidiary) and a five-time winner of the Loebner prize.",
"Kuki is largely based on a rule-based scripting language called Artificial Intelligence Markup Language (AIML) which employs a pattern matching approach for NLU, rather than an E2E approach.",
"To measure the INSTIGATOREFFECT , we collect an agent's response to various conversational inputs with increasing safety concerns and use three different tools for detecting offensive content in the agent response.",
"Tooling First, we use a previously compiled list of offensive words and phrases 2 and report the percentage of model responses that contain at least one member of the list.",
"This approach is similar to the HONEST score by Nozza et al. (2021).",
"Next, we use a dialog safety classifier from Dinan et al. (2019), and report the percentage of model responses that are flagged as unsafe by this classifier.",
"The implementation details of this model are provided in B.3.",
"Finally, if a developer has access to the PERSPECTIVEAPI 3 for their particular use case, they are optionally able to use it for further analysis.",
"4 This API provides an output from 0 to 1 corresponding to the toxicity of the input text.",
"More details can be found in B.1.",
"Following Gehman et al. (2020), we label an input text as toxic if the API produces a score 0 .",
"5 .",
"We then report the percentage of responses labeled toxic .",
"Model Input The conversational inputs are intended to mimic four possible safety settings a system might face during deployment.",
"Example input messages for each setting are shown in Table 7 in Appendix B.4.",
"The safe setting tests whether the model produces offensive language even when the input text is largely innocuous.",
"For this, we take 180 input texts from the Blended Skill Talk (BST) (Smith et al., 2020a) validation set.",
"The BST tasks are based off of the ConvAI2 (Zhang et al., 2018; Dinan et al., 2020), Wizard of Wikipedia (Di-nan et al., 2020), and Empathetic dialogs (Rashkin 2 We use the English sub-list of the List of Dirty, Naughty, Obscene, and Otherwise Bad Words , downloaded from https://github.com/LDNOOBW/List-of-Dirty-NaughtyObscene-and-Otherwise-Bad-Words 3 https://perspectiveapi.com/ 4 Note that we do not provide API access to the PERSPECTIVEAPI. Developers must request access for their particular use case following the directions and Terms of Service outlined here: https://developers.perspectiveapi.com/ s/docs-get-started et al., 2019) tasks.",
"Each of these tasks was originally crowd-sourced on Mechanical Turk with instructions for the crowdworkers to not use offensive language.",
"Crowdworker data likely differs substantially from the domain in which a chatbot might be deployed (Ross et al., 2009).",
"Social media data may serve as a better approximation of the type of real world noise that would be expected in such a deployed environment, including grammatical errors and out-of-vocabulary words (Baldwin et al., 2013) as well as a higher rate of offensiveness (Xu et al., 2020).",
"For such a real world noise setting , we take 180 examples from the Twitter validation dataset accessible via ParlAI.",
"5 To test how the model responds in an unsafe setting , we select 180 examples from the Build-it Break-it Fixit Standard dataset (Dinan et al., 2019) which are labeled as unsafe .",
"This can for example include abusive user behavior, which according to past research affects one in ten human-bot conversations (De Angeli and Carpenter, 2005; De Angeli and Brahnam, 2008).",
"In addition, Miller et al. (2017) argue that adversarial attacks must be expected and planned for when designing such systems based on their analysis of Tay (see 1).",
"To test how the model responds in such an adversarial unsafe setting , we use the 180 example Bot Adversarial Dialog (BAD) test set introduced by Xu et al. (2020).",
"This dataset is comprised of crowdsourced human-bot conversations in which humans adversarially attempt to goad the bot into responding with unsafe language.",
"Results We report metrics for all available tools as well as the percentage of responses that were flagged by any or all tools in Table",
"2. While not universally true across all agents, tools, and settings, we observe that the agents often tend to produce more unsafe language as the setting becomes increasingly unsafe.",
"We note that some agents show a decrease in this metric as we move from the standard unsafe setting to the adversarial one; this may be because the adversarial input conversations tend to contain more subtle offensive language which either the dialog model may not understand or the classification methods do not pick up on (Xu et al., 2020; Han and Tsvetkov, 2020).",
"We also note that Kuki, like the E2E neural models, is not immune to responding with unsafe language: we observe that in cases when Kuki's responses are marked 5 https://github.com/facebookresearch/ParlAI/ tree/main/parlai/tasks/twitter 4117 as unsafe by all tools, it often had repeated all or part of the (potentially offensive) input.",
"Example responses are shown in Table 8 in B.5.",
"Error analysis Table 2 shows that the three different tools largely disagree on the classification of each response: few responses are marked unsafe by all tools.",
"One reason for this is that often a system will respond with offensive language that does not contain words or phrases from the word list.",
"In general, the safety classifier is more likely to flag responses as unsafe as compared to the PERSPECTIVEAPI, possibly because it was trained to identify dialog utterances that are not OK to send in a friendly conversation with someone you just met online , which may encapsulate more than just toxic responses (Dinan et al., 2019).",
"To quantify the differing behavior of these three tools, we analyze their performance on 500 examples from the single-turn Bot-Adversarial dialog validation set (Xu et al., 2020), which contains annotated examples from both human and bot-generated dialog utterances, in Table",
"3. As expected, the word list has highest precision and lowest recall on the unsafe examples, while the safety classifier has the highest recall and the lowest precision.",
"If a developer intends to minimize the risk of their model generating something inappropriate, then they may index on the tool with the highest recall, or use the tools as an ensemble.",
"On the other hand, if one intends to use these tools to identify user abuse, they may prioritize precision in order to avoid falsely accusing the user.",
"To measure the YEA-SAYEREFFECT , we collect agent responses to hateful input and analyze the responses for affirmation of that input.",
"Offensive inputs are constructed using the set up described in Sheng et al. (2021): the authors take a list of 17 demographic dimensions along the axes of social class, sexual orientation, race, and gender ( X ) and a list of negative adjectives ( Y ) and construct the input template X are Y .",
"We expand upon this by adding two additional templates with slight variations on phrasing: X are usually Y and I think X are Y in order to measure robustness.",
"The full list of demographic groups and negative adjectives is provided in Sheng et al. (2021), however, for speed, we use only first 10 adjectives provided in the list, yielding 510 6 total model inputs.",
"To assess whether the response affirms the input, we use four different tools.",
"Following Sheng et al. (2021), we use sentiment analysis with VADER (Hutto and Gilbert, 2014), a rule-based sentiment analyzer, utilizing the implementation provided by NLTK (Xue, 2011).",
"Positive sentiment in the response is then used as a proxy for affirmation.",
"Next, we try negation detection : using the spaCy dependency parser, 7 we detect whether a response contains negations by checking if any token contains a negation relation to another token.",
"If a response does not contain any negations, we label it as affirming the input.",
"Note that this technique only checks for a single negation.",
"More than one negation would not resolve to a positive under these circumstances, which may or may not be favorable depending on the dialect of English in usage.",
"Furthermore, we use the multi-turn dialog safety classifier provided by Xu et al. (2020) in ParlAI, 8 which is trained to determine whether a response was offensive provided some dialog context as input (implementation details in B.6).",
"If the classifier marks the response as offensive in context , we label it as affirming the input.",
"Finally, we use the stance classifier from Baheti et al. (2021) that was trained to detect whether a response is neutral towards, agrees with, or disagrees with the conversational context.",
"Specifically, we use the provided DialoGPT-medium (Zhang et al., 2020b) formulation described in Baheti et al. (2021), and label a response as affirming the input if the classifier predicts agree .",
"Results We report results for our convAI agents in Table",
"4. Example responses are provided in Table 9 in Appendix B.7.",
"For all models, at least 43% of responses were determined to indicate positive sentiment.",
"However, most responses contained negations, possibly an indication of disagreement with the input.",
"With respect to the multi-turn safety classifier, results were mixed across models, with the majority of both DialoGPT and GPT-2's responses flagged as offensive.",
"GPT-2 also had the most responses flagged as agreeing with the context by the stance classifier.",
"Taking a look at the logs, we see interesting phenomena at play which demonstrates the incompatibility of the different 6 17 demographic dimensions ( X ) with 10 negative adjectives ( Y ) and 3 different input templates yields 510 examples.",
"methods.",
"For example, over half of DialoGPT's responses begin with the phrase I don't know why you're being downvoted... and go on to express agreement with the input.",
"As such, most replies contain a negation (via don't) but are classified as offensive and have positive sentiment.",
"Robustness As expected, we often observe significantly different responses to the three minimal variations of each input pair ( X, Y ) .",
"E.g., in Table 9, we see responses from the BST 2.7B benchmark change from strong agreement (I know, right?...) to mild disagreement (I don't think...).",
"We analyze this quantitatively in Table 5, where we report the percentage of response triples for which each tool yields the same prediction.",
"Ideally, the tool predictions would stay constant since the surface perturbations of the input text across the triples are minimal and do not change the semantic mean-Sent.",
"ing. Kuki is unsurprisingly the most robust to input variations; whereas we observe low agreement for the neural generative models.",
"This highlights the difficulty of these models being adversarially robust to minimal input changes, a phenomenon which has been observed by Niu and Bansal (2018).",
"Error analysis To assess the validity of this test, three authors performed a blind annotation of 75 example inputs and responses from our benchmark models labeling 3-valued affirmation, similar to Baheti et al. (2021).",
"Taking a majority vote among the three humans for the human label, we report correlation metrics with each of the four assessment tools in Table",
"6. Both the Spearman's correlation coeffient and cosine similarity metrics show 4119 Sent.",
"that the multi-turn safety classifier has the highest correlation with the human labels, followed by the stance classifier.",
"Sentiment analysis and negation detection are poor measures of affirmation.",
"Furthermore, while we found relatively high Inter-Annotator Agreement (IAA) among the three annotators Krippendorff's (Krippendorff, 2011) = 0 .",
"765 all annotators found the idea of affirmation a poor indicator of general appropriateness.",
"For example, all annotators labeled responses beginning with I'm not sure what you mean by that... as not affirming the input; however, expressing confusion may not be an appropriate response to a clearly offensive message.",
"E.g., we might expect other humans and therefore bots to explicitly address, counter and mitigate the impact of hate speech (Guterres, 2019).",
"Moreover, for many inputs, one would need to consult experts to determine what constitutes an appropriate response.",
"It may be more suitable to train a classifier to detect these kinds of hate speech and output a expert-informed response rather than relying on the generative model (Xu et al., 2020).",
"To the best of our knowledge, there are only a limited number of open-source tools available for detecting IMPOSTOREFFECT situations, i.e., where a bot gives inappropriate or unsafe advice.",
"For example, Gros et al. (2021) provide a trained classifier to detect whether the user asks for the nonhuman status of the bot.",
"Zeng et al. (2020) provide a corpus of scraped online medical conversations.",
"However, what is an appropriate reply in such situations is dependent on the context of deployment (e.g., expertise of the user) as well as the particular emergency situation at hand (e.g., self-harm vs. general medical enquiry cf.",
"Bickmore et al. (2018)), and will benefit from expert guidance.",
"We thus advocate that the IMPOSTOREFFECT should not be approached as an E2E task, but instead with a modular architecture where these situations are robustly detected by a NLU component, and then an expert response is issued (Xu et al., 2020).",
"As such, we do not integrate any tools in SAFETYKIT .",
"3.2 Integration Tests Due to the shortcomings of automatic metrics, we recommend to also conduct a human evaluation.",
"Therefore, our open-sourced SAFETYKIT additionally contains tooling for integration tests to allow the usage of human evaluations, provided the same black box access to a model.",
"In particular, we support the use of existing tooling developed and open-sourced by Xu et al. (2020) for assessing whether a model's response to a dialog history is offensive in the context of the conversation with both adversarial and non-adversarial interlocutors, effectively measuring both the INSTIGATOREFFECT and YEA-SAYEREFFECT .",
"The full evaluation setup is described in Xu et al. (2020), and the performance of benchmark agents (not including Kuki) on these human evaluations is shown therein as such, we do not perform additional crowdworker evaluations as part of this work.",
"Additional details are provided in Appendix C. We note that the use of crowdworkers is a significant limitation of this tooling: crowdworker populations may not be representative of the eventual audience of a deployed model (Ross et al., 2009), and in particular, it is important in any human studies to ensure the inclusion of people from underrepresented and marginalized communities.",
"9 See further discussion in 5.",
"We identify three safety-sensitive situations for E2E convAI systems: the INSTIGATOR , YEASAYER , and IMPOSTOREFFECTS where the latter two are unique to interactive, conversational settings.",
"We then empirically assess the extent to 9 https://partnershiponai.org/methodsforinclusion 4120 which current tools can measure these effects and current systems display them.",
"We release these tools as part of a first aid kit (SAFETYKIT ) to quickly assess safety concerns.",
"Our results show that, while current tools are able to provide an estimate of the relative safety of systems in various settings, they still have several shortcomings especially for utterances which are contextually unsafe.",
"We thus encourage further contributions to SAFETYKIT , e.g., research into more comprehensive automatic measures, as well as into human evaluation and iterative, value-based frameworks to assess potential harms, e.g., Friedman et al. (2008).",
"This paper assess the extent to which existing tooling can help us understand unsafe phenomena exhibited by E2E conversational models when deployed with humans.",
"As part of this study, we release SAFETYKIT as a first aid kit for quickly assessing safety concerns.",
"As noted, the tooling provided in SAFETYKIT has several limitations which restrict its utility, and it is thus recommended for use only as a preliminary step towards considering the ethical and social consequences related to the relative safety of an end-to-end conversational AI model.",
"We describe several limitations as well as additional ethical considerations here.",
"Language Firstly, the unit and integration tests are limited to English-language data that has largely been collected using crowdworkers located in the United States.",
"As the very notion of offensiveness is highly dependent on social context (Hovy and Yang, 2021), this will be insufficient for measuring the appropriateness of a model's responses in other dialects, cultures, and languages (Schmidt and Wiegand, 2017).",
"Approaches, like the HONEST score (Nozza et al., 2021) can help begin to address this issue on a language basis.",
"However, even for English speakers in the United States, the tools posed in this work may have limited utility: see discussion in the next paragraph.",
"Bias and accuracy of automatic tooling For the unit tests, we rely on automatic tooling to provide a picture of the behavior of a conversational agent.",
"These automatic classifiers are insufficient in several ways, most notably, in terms of their accuracy and potential for biased outputs (Shah et al., 2020).",
"Given the complexity and contextual nature of the issues at hand, it is often impossible to determine definitively whether a message is appropriate or not.",
"For offensive language detection, inter-annotator agreement (IAA) on human labeling tasks is typically low (Fortuna, 2017; Wulczyn et al., 2017).",
"In order to resolve this disagreement, aggregate or majority ground truth labels are assigned, which run the danger of erasing minority perspectives (Blodgett, 2021; Basile et al., 2021; Basile, 2021).",
"And even for examples with high agreement, it is likely that these existing classifiers may make mistakes or do not adequately assess the appropriateness of a response see the error analyses of the results in 3.1.1 and 3.1.2.",
"For example, these tools may have difficulty with complex sentence construction, such as sentences with multiple negation, or with pieces of text that contain subtle cultural references, etc.",
"In particular, these tools may have limited utility for underrepresented and marginalized groups.",
"Various social factors affect how people produce language, and given that crowdworker demographics differ substantially from the general population of the United States (Ross et al., 2009), we would likely expect that these technologies work less well on some varieties of English.",
"Indeed, recent work has shown that popular toxicity detection and mitigation methods themselves including ones used in this work are biased (Rttger et al., 2021).",
"For example, Sap et al. (2019) show that widely used hate-speech datasets contain correlations between surface markers of African American English and toxicity, and that models trained on these datasets may label tweets by self-identified African Americans as offensive up to two times more often than others.",
"Zhou et al. (2021) show that existing methods for mitigating this bias are largely ineffective.",
"Xu et al. (2021a) show that popular methods for mitigating toxic generation in LLMs decreases the utility of these models on marginalized groups, potentially resulting in harms such as forcing marginalized users to code-switch.",
"Notably, the list of words and phrases used to detect which responses contain unsafe language (3.1.1) contains words like twink ; filtering out or marking these words as unsafe may have the effect of limiting discourse in spaces for LGBTQ+ people (Bender et al., 2021).",
"10 It is important that future contributions to SAFETYKIT be inclusive of underrepresented communities, and as such, more work is needed to be done to understand the impact of existing safety tooling on those communities.",
"Lastly, most of these tools are static (or are trained on static data) and as such do not account for value-change, such as when a word takes on a new cultural meaning or sentiment, like coron-avirus.",
"Audience approximation While the proposed integration tests aim at a more comprehensive testing of models via humans in-the-loop via crowdworkers, the makeup of the crowdworkers may differ substantially from the intended audience of a deployed model.",
"We emphasize that no crowdworker data was collected over the course of this work, and that researchers using the provided tooling to collect human evaluations should try to ensure they collect annotations from a representative population of crowdworkers.",
"Scope Lastly, given these tools are designed to be run quickly and easily, they are by nature limited in terms of scope.",
"We recommend using the tools as a first pass at understanding how an English-language dialog model behaves in the face of various inputs ranging from innocuous to deeply offensive.",
"Depending on the exact use case and the potential harm at stake, further considerations should be taken into account.",
"In other words, showing top performance on SAFETYKIT is not sufficient for making a decision of whether or not to release a model.",
"Instead, we recommend an application and context specific cost-benefit analysis based on values and possible impacts, e.g., using frameworks such as Value Sensitive Design (Friedman et al., 2008).",
"Note that each context of an application may lead to a different assessment of what is safe or not.",
"Thanks to Chlo Bakalar, Miranda Bogen, and Adina Williams for their helpful comments.",
"Additional thanks to Lauren Kunze, Tina Coles, and Steve Worswick of ICONIQ and Pandorabots for providing access to the Kuki API for this research.",
"Verena Rieser's and Gavin Abercrombie's contribution was supported by the EPSRC project Gen-der Bias in Conversational AI' (EP/T023767/1).",
"Dirk Hovy received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944)."
] | [
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"This paper introduces QAConv , 1 , a new question answering (QA) dataset that uses conversations as a knowledge source.",
"We focus on informative conversations, including business emails, panel discussions, and work channels.",
"Unlike open-domain and task-oriented dialogues, these conversations are usually long, complex, asynchronous, and involve strong domain knowledge.",
"In total, we collect 34,608 QA pairs from 10,259 selected conversations with both human-written and machine-generated questions.",
"We use a question generator and a dialogue summarizer as auxiliary tools to collect and recommend questions.",
"The dataset has two testing scenarios: chunk mode and full mode, depending on whether the grounded partial conversation is provided or retrieved.",
"Experimental results show that state-of-the-art pretrained QA systems have limited zero-shot performance and tend to predict our questions as unanswerable.",
"Our dataset provides a new training and evaluation testbed to facilitate QA on conversations research.",
"Having conversations is one of the most common ways to share knowledge and exchange information.",
"Recently, many communication tools and platforms are heavily used with the increasing volume of remote working, and how to effectively retrieve information and answer questions based on past conversations becomes more and more important.",
"In this paper, we focus on QA on conversations such as business emails (e.g., Gmail), panel discussions (e.g., Zoom), and work channels (e.g., Slack).",
"Different from daily chit-chat (Li et al., 2017) and task-oriented dialogues (Budzianowski et al., 2018), these conversations are usually long, complex, asynchronous, multi-party, and involve 1 Data and code are available at https://github.",
"strong domain knowledge.",
"We refer to them as informative conversations and an example is shown in Figure",
"1. However, QA research mainly focuses on document understanding (e.g., Wikipedia) not dialogue understanding, and dialogues have significant differences with documents in terms of data format and wording style, and important information is scattered in multiple speakers and turns (Wolf et al., 2019b; Wu et al., 2020).",
"Moreover, existing work related to QA and conversational AI focuses on conversational QA (Reddy et al., 2019; Choi et al., 2018) instead of QA on conversations.",
"Conversational QA has sequential dialogue-like QA pairs that are grounded on a short document paragraph, but what we are more interested in is to have QA pairs grounded on conversations, treating past dialogues as a knowledge source.",
"QA on conversation has several unique challenges: 1) information is distributed across multiple speakers and scattered among dialogue turns; 2) Harder coreference resolution problem of speakers and entities, and 3) missing supervision as no training data in such format is available.",
"The most related work to ours is the FriendsQA dataset (Yang and Choi, 2019) and the Molweni dataset (Li et al., 2020).",
"However, the former is built on chit-chat transcripts of TV shows with only one thousand dialogues, and the latter has short conversations in a specific domain (i.e., Ubuntu).",
"The dataset comparison is shown in Table",
"1. Therefore, we introduce QAConv dataset, sampling 10,259 conversations from email, panel, and channel data.",
"The longest dialogue sample in our data has 19,917 words (or 32 speakers), coming from a long panel discussion.",
"We segment long conversations into shorter conversational chunks to collect human-written (HW) QA pairs or to modify machine-generated (MG) QA pairs from Amazon Mechanical Turk (AMT).",
"We train a multi-hop question generator and a dialogue summarizer to 5389 Figure 1: An example of question answering on conversations and the data collection flow.",
"generate QA pairs.",
"We use QA models to identify uncertain samples and conduct an additional human verification stage.",
"The data collection flow is shown in Figure",
"1. In total, we collect 34,608 QA pairs.",
"We construct two testing scenarios: 1) In the chunk mode, a conversational chunk is provided to answer questions, similar to the SQuAD dataset (Rajpurkar et al., 2016); 2) In the full mode, a conversational-retrieval stage is required before answering questions, similar to the open-domain QA dataset (Chen and Yih, 2020).",
"We explore several state-of-the-art QA models such as the span extraction RoBERTa-Large model (Liu et al., 2019) trained on SQuAD 2.0 dataset, and the generative UnifiedQA model (Khashabi et al., 2020) trained on eight different QA datasets and showed its generalization ability to 12 unseen QA corpora.",
"We investigate the statistic-based BM25 (Robertson et al., 1994) retriever and the neural-based dense passage retriever (Karpukhin et al., 2020) trained on Wikipedia (DPR-wiki).",
"We show zero-shot and finetuning performances in both modes and conduct improvement study and error analysis.",
"The main contributions of our paper are threefold: 1) QAConv provides a new testbed for QA on informative conversations including emails, panel discussions, and work channels.",
"We show the potential of treating long conversations as a knowledge source, and point out a performance gap between QA on documents and QA on conversations; 2) We incorporate question generation (QG) model into the QA data collection, and we show the effectiveness of such approach in human evaluation.",
"3) We introduce chunk mode and full mode settings for QA on conversations, and our training data enables existing QA models to perform better on dialogue understanding.",
"Our dataset is collected in four stages: 1) selecting and segmenting informative conversations, 2) generating question candidates by QG models, 3) crowdsourcing question-answer pairs on those con-versations/questions, and 4) conducting quality verification and data splits.",
"Full data statistics are shown in Table",
"2. First, we use the British Columbia conversation corpora (BC3) (Ulrich et al., 2008) and the Enron Corpus (Klimt and Yang, 2004) to represent business email use cases.",
"The BC3 is a subset of the World Wide Web Consortium's (W3C) sites that are less technical.",
"We sample threaded Enron emails 5390 QAConv Molweni DREAM FriendsQA Full Chunk Source Email, Panel, Channel Channel Chit-chat Chit-chat Domain General Ubuntu Daily TV show Formulation Span/Unanswerable Span/Unanswerable Multiple choice Span Questions 34,608 30,066 10,197 10,610 Dialogues 10,259 18,728 9,754 6,444 1,222 Avg/Max Words 568.8 / 19,917 303.5 / 6,787 104.4 / 208 75.5 / 1,221 277.0 / 2,438 Avg/Max Speakers 2.8 / 32 2.9 / 14 3.5 / 9 2.0 / 2 3.9 / 15 Table 1: Dataset comparison with existing datasets.",
"from (Agarwal et al., 2012), which were collected from the Enron Corporation.",
"Second, we select the Court corpus (Danescu-Niculescu-Mizil et al., 2012) and the Media dataset (Zhu et al., 2021) as panel discussion data.",
"The Court data is the transcripts of oral arguments before the United States Supreme Court.",
"The Media data is the interview transcriptions from National Public Radio and Cable News Network.",
"Third, we choose the Slack chats (Chatterjee et al., 2020) to represent work channel conversations.",
"The Slack data was crawled from several public software-related development channels such as pythondev#help .",
"All data we use is publicly available and their license and privacy (Section A.3) information are shown in the Appendix.",
"One of the main challenges in our dataset collection is the length of input conversations and thus resulting in very inefficient for crowd workers to work on.",
"For example, on average there are 13,143 words per dialogue in the Court dataset, and there is no clear boundary annotation in a long conversation of a Slack channel.",
"Therefore, we segment long dialogues into short chunks by a turn-based buffer to assure that the maximum number of tokens in each chunk is lower than a fixed threshold, i.e., 512.",
"For the Slack channels, we use the disentanglement script from (Chatterjee et al., 2020) to split channel messages into separated conversational threads, then we either segment long threads or combine short threads to obtain the final conversational chunks.",
"Synthetic dataset construction has been shown to improve robustness (Gupta et al., 2021) and improve the complexity of test sets (Feng et al., 2021).",
"We leverage a question generator and a dialogue summarizer to generate and recommend some questions to workers.",
"We train a T5-Base (Raffel et al., 2019) model on HotpotQA (Yang et al., 2018), which is a QA dataset featuring natural and multihop questions, to generate questions for our conversational chunks.",
"By the second hypothesis, we first train a BART (Lewis et al., 2020) summarizer on News (Narayan et al., 2018) and dialogue summarization corpora (Gliwa et al., 2019) and run QG models on top of the generated summaries.",
"We filter out generated questions that a QA model can predict the same answers we used in our QG model, which we hypothesize that these questions could be easy questions that we would like to avoid.",
"Note that our QG model has grounded answers since it is trained to generate questions by giving a text context and an extracted entity.",
"We hypothesize that these questions are trivial questions in which answers can be easily found, and thus not interesting for our dataset.",
"Examples of 5391 Figure 2: Question type tree map and examples (Best view in color).",
"our generated multi-hop questions are shown in the Appendix (Table 18).",
"We use two strategies to collect QA pairs, human writer and machine generator.",
"We first ask crowd workers to read partial conversations, and then we randomly assign two settings: 1) writing QA pairs themselves or 2) selecting one recommended machine-generated question to answer.",
"We apply several on-the-fly constraints to control the quality of the collected QA pairs: 1) questions should have more than 6 words with a question mark in the end; 2) questions and answers cannot contain first-person and second-person pronouns (e.g., I, you, etc.); 3) answers have to be less than 20 words , and 4) all words have to appear in source conversations.",
"We randomly select four MG questions from our question pool and ask crowd workers to answer one of them, without providing any potential answers.",
"They are allowed to modify questions if necessary.",
"To collect unanswerable questions, we ask crowd workers to write questions with at least three entities mentioned in the given conversations but they are not answerable.",
"We pay crowd workers roughly $8-10 per hour, and the average time to read and write one QA pair is approximately 4 minutes.",
"We design a filter mechanism based on different potential answers: human writer's answers, answer from existing QA models, and QG answers.",
"If all the answers have a pairwise fuzzy matching ratio (FZ-R) scores 2 lower than 75%, we then run another crowdsourcing round and ask crowd workers to select one of the following options: A) the QA pair looks good, B) the question is not answerable, C) the question has a wrong answer, and D) the question has a right answer but I prefer another answer.",
"We run this step on around 40% samples which are uncertain.",
"We filter the questions of the (C) option and add answers of the (D) option into the ground truth.",
"In questions marked with 2 https://pypi.org/project/fuzzywuzzy 5392 option (B), we combine them with the unanswerable questions that we have collected.",
"In addition, we include 1% random questions (questions that are sampled from other conversations) to the same batch of data collection as a qualification test.",
"We filter crowd workers' results if they fail to indicate such a question as an option (B).",
"Finally, we split the data into 27,287 training samples, 3,660 validation samples, and 3,661 testing samples.",
"There are 4.7%, 5.1%, 4.8% unanswerable questions in train, validation, and test split, respectively.",
"In this section, we analyze our collected questions and answers.",
"We first investigate question type distribution and we compare human-written questions and machine-generated questions.",
"We then analyze answers by an existing named-entity recognition (NER) model and a constituent parser.",
"Question Type.",
"We show the question type tree map in Figure 2 and the detailed comparison with other datasets in Table 3.",
"In QAConv , the top 5 question types are what-question (29%), which-question (27%), how-question (12%), who-question (10%), and when-question (6%).",
"Comparing to SQuAD 2.0 (49% what-question), our dataset have a more balanced question distribution.",
"The question distribution of unanswerable questions is different from the overall distribution.",
"The top 5 unanswerable question types are what-question (45%), why-question (15%), how-question (12%), which-question (10%), and when-question (8%).",
"Human Writer v.s. Machine Generator.",
"As shown in Table 4, there are 41.7% questions which are machine-generated questions.",
"Since we still give crowd workers the freedom to modify questions if necessary, we cannot guarantee these questions are unchanged.",
"We find that 33.56% of our recommended questions have not been changed (100% fuzzy matching score) and 19.92% of them are slightly modified (81%-99% fuzzy matching score).",
"To dive into the characteristics and differences of these two question sources, we further conduct the human evaluation by sampling 200 conversation chunks randomly.",
"We select chunks that have QG questions unchanged (i.e., sampling from the 33.56% QG questions).",
"We ask three annotators to first write an answer to the given question and conversation, then label fluency (how fluent Source Question Generator Human Writer Questions 14,426 (41.7%) 20,178 (58.3%) Type 100 81-99 51-79 0-50 Ans.",
"and grammatically correct the question is, from 0 to 2), complexity (how hard to find an answer, from 0 to 2), and confidence (whether they are confident with their answer, 0 or 1).",
"More details of each evaluation dimension (Section A.4) and performance difference (Table 12) are shown in the Appendix.",
"The results in Table 4 indicate that QG questions are longer, more fluent, more complex, and crowd workers are less confident that they are providing the right answers.",
"This observation further con-firmed our hypothesis that the question generation strategy is effective to collect harder QA examples.",
"Following Rajpurkar et al. (2016), we used Part-Of-Speech (POS) (Kitaev and Klein, 2018) and Spacy NER taggers to study answers diversity.",
"Firstly, we use the NER tagger to assign an entity type to the answers.",
"However, since our answers are not necessary to be an entity, those answers without entity tags are then pass to the POS tagger, to extract the corresponding phrases tag.",
"In Table 5, we can see that Noun phrases make up 30.4% of the data; followed by People, Organization, Dates, other numeric, and Countries; and the remaining are made up of clauses and other types.",
"Full category distribution is shown in the Appendix (Figure 3).",
"Note that there are around 1% of answers in our dataset are coming from multiple source text spans (exam-ples are shown in Appendix Table 17).",
"The main difference between the two modes is whether the conversational chunk we used to collect QA pairs is provided or not.",
"In the chunk mode, our task is more like a traditional machine reading comprehension task that answers can be found (or cannot be found) in a short paragraph, usually less than 500 words.",
"In the full mode, on the other hand, we usually need an information retrieval stage before the QA stage.",
"For example, in the Natural Question dataset (Kwiatkowski et al., 2019), they split Wikipedia into millions of passages and retrieve the most relevant one to answer.",
"We define our full mode task with the following assumptions: 1) for the email and panel data, we assume to know which dialogue a question is corresponding to, that is, we only search chunks within the dialogue instead of all the possible conversations. This is simpler and more reasonable because each conversation is independent; 2) for slack data, we assume that we only know which channel a question belongs to but not the corresponding thread, so the retrieval part has to be done in the whole channel. Although chunk mode may be a better way to evaluate the ability of machine reading comprehension, the full mode is more practical as it is close to our setup in the real world.",
"There are two categories of question answering models: span-based extractive models which predict answers' start and end positions, and free-form text generation models which directly generate answers token by token.",
"All the state-of-the-art models are based on large-scale language models, which are first pretrained on the general text and then finetuned on other QA tasks.",
"We evaluate all of them on both zero-shot and finetuned settings (further finetuned on the QAConv training set), and both chunk mode and full mode with retrievers.",
"In addition, we run these models on the Molweni (Li et al., 2020) dataset for comparison and find out our baselines outperform the best-reported model, DADgraph (Li et al., 2021a) model, which used expensive discourse annotation on graph neural network.",
"We show the Molweni results in the Appendix (Table 11).",
"We use several models finetuned on the SQuAD 2.0 dataset as span extractive baselines.",
"We use uploaded models from huggingface (Wolf et al., 2019a) library.",
"DistilBERT (Sanh et al., 2019) is a knowledge-distilled version with 40% size reduction from the BERT model, and it is widely used in mobile devices.",
"The BERT-Base and RoBERTa-Base (Liu et al., 2019) models are evaluated as the most commonly used in the research community.",
"We also run the BERT-Large and RoBERTa-Large models as stronger baselines.",
"We use the whole-word masking version of BERT-Large instead of the token masking one from the original paper since it performs better.",
"We run several versions of UnifiedQA models (Khashabi et al., 2020) as strong generative QA baselines.",
"UnifiedQA is based on T5 model (Raf-fel et al., 2019), a language model that has been pretrained on 750GB C4 text corpus.",
"UnifiedQA further finetuned T5 models on eight existing QA corpora spanning four diverse formats, including extractive, abstractive, multiple-choice, and yes/no questions.",
"It has achieved state-of-the-art results on 10 factoid and commonsense QA datasets.",
"We finetune UnifiedQA on our datasets with T5-Base, T5-Large size, and T5-3B.",
"We report T5-11B size for the zero-shot performance.",
"Two retrieval baselines are investigated in this paper: BM25 and DPR-wiki (Karpukhin et al., 2020).",
"The BM25 retriever is a bag-of-words retrieval function weighted by term frequency and inverse document frequency.",
"The DPR-wiki model is a BERT-based dense retriever model trained for open-domain QA tasks, learning to retrieve the most relevant Wikipedia passage.",
"We train most of our experiments on 2 V100 NVIDIA GPUs with a batch size that maximizes their memory usage, except T5-3B we train on four A100 NVIDIA GPUs with batch size 1 with several parallel tricks, such as fp16, sharded_ddp and deepseep library.",
"We train 10 epochs for all T5 models and 5 epochs for all BERT-based models.",
"We release hyper-parameter setting and trained models to help reproduce baseline results.",
"We follow the standard evaluation metrics in the QA community: exact match (EM) and F1 scores.",
"The EM score is a strict score that predicted answers have to be the same as the ground truth 5394 Zero-Shot Finetune EM F1 FZ-R EM F1 FZ-R Human Performance* 79.99 89.87 92.33 --DistilBERT-Base-SQuAD2.0 40.04 46.90 59.62 57.28 68.88 75.39 BERT-Base-SQuAD2.0 36.22 44.57 57.72 58.84 71.02 77.03 BERT-Large-SQuAD2.0 53.54 62.58 71.11 64.93 76.65 81.27 RoBERTa-Base-SQuAD2.0 48.92 57.33 67.40 63.64 75.53 80.38 RoBERTa-Large-SQuAD2.0 50.78 59.73 69.11 67.80 78.80 83.10 T5-Base-UnifiedQA 51.95 65.48 73.26 64.98 76.52 81.69 T5-Large-UnifiedQA 58.81 71.67 77.72 66.76 78.67 83.21 T5-3B-UnifiedQA 59.93 73.07 78.89 67.41 79.41 83.64 T5-11B-UnifiedQA 44.96 61.52 68.68 -Table 6: Evaluation results: Chunk mode on the test set.",
"answers.",
"The F1 score is calculated by tokens overlapping between predicted answers and ground truth answers.",
"In addition, we also report the FZR scores, which used the Levenshtein distance to calculate the differences between sequences.",
"We follow Rajpurkar et al. (2016) to normalize the answers in several ways: remove stop-words, remove punctuation, and lowercase each character.",
"We add one step with the num2words and word2number libraries to avoid prediction difference such as 2 and two.",
"As the chunk mode results on the test set shown in Table 6, UnifiedQA T5 models, in general, outperform BERT/RoBERTa models in the zero-shot setting, and the performance increases as the size of the model increases.",
"This observation matches the recent trend that large-scale pretrained language model finetuned on aggregated datasets of a specific downstream task (e.g., QA tasks (Khashabi et al., 2020) or dialogue task (Wu et al., 2020)) can show state-of-the-art performance by knowledge transfer.",
"Due to the space limit, all the development set results are shown in the Appendix.",
"We observe a big improvement from all the baselines after finetuning on our training set, suggesting the effectiveness of our data to improve dialogue understanding.",
"Those span-based models, meanwhile, achieve similar performance to UnifiedQA T5 models with smaller model sizes.",
"BERT-Base model has the largest improvement gain by 22.6 EM score after finetuning.",
"We find that the UnifiedQA T5 model with 11B parameters cannot achieve performance as good as the 3B model, we guess that the released checkpoint has not been optimized well by Khashabi et al. (2020).",
"In addition, we estimate human performance by asking crowd workers to answer around 10% QA pairs in test set.",
"We collect two answers for each question and select one that has a higher FZ-R score.",
"We observe an EM score at around 80% and an F1 score at 90%, which still shows a considerable gap with existing 5395 Zero-Shot Finetune Ans.",
"The retriever results are shown in Table 8, in which we find that BM25 outperforms DPR-wiki by a large margin in our dataset on the recall@ k measure, where we report k = 1 , 3 , 5 , 10 .",
"The two possible reasons are that 1) the difference in data distribution between Wikipedia and conversation is large and DPR is not able to properly transfer to unseen documents, and 2) questions in QAConv are more specific to those mentioned entities, which makes the BM25 method more reliable.",
"We show the full mode results in Table 7 using BM25 (DPR-wiki results in the Appendix Table 16).",
"We use the top one retrieved conversational chunk as input to feed the trained QA models.",
"As a result, the performance of UnifiedQA (T5-3B) drops by 18.2% EM score in the zero-shot setting, and the finetuned results of RoBERTa-Large drop by 22.2% EM score as well, suggesting a serious error propagation issue in the full mode that requires further investigation in the future work.",
"We further check the results difference between answerable and unanswerable questions in Table 9.",
"The UnifiedQA T5 models outperform span-based models among the answerable questions, however, they are not able to answer any unanswerable questions and keep predicting some answers.",
"More interestingly, we observe that those span-based models perform poorly on answerable questions, as they can achieve a high recall but a low F1 score on unanswerable questions with a binary setting (predict answerable or unanswerable).",
"This implies that existing span-based models tend to predict our task as unanswerable, revealing their weakness of dialogue understanding ability.",
"Then we check what kinds of QA samples in the test set are improved the most while finetuning on our training data using RoBERTa-Large.",
"We find that 75% of such samples are incorrectly predicted to be unanswerable, which is consistent with the results in Table 9.",
"We also analyze the error prediction after finetuning.",
"We find that 35.5% are what-question errors, 18.2% are which-question errors, 12.1% are how-question errors, and 10.3% are who-question errors.",
"In addition, we sample 100 QA pairs from the errors which have an FZ-R score lower than 50% and manually check and categorize these predicted answers.",
"We find out that 20% of such examples are somehow reasonable and may be able to count as correct answers (e.g., UCLA v.s. University of California, Jay Sonneburg v.s. Jay), 31% are predicted wrong answers but with correct entity type (e.g., Eurasia v.s. China, Susan Flynn v.s. Sara Shackle-ton), 38% are wrong answers with different entity types (e.g., prison v.s. drug test, Thanksgiving v.s., fourth quarter), and 11% are classified as unanswerable questions wrongly.",
"This finding reveals the weakness of current evaluation metrics that they cannot measure semantic distances between two different answers.",
"QA datasets can be categorized into four groups.",
"The first one is cloze-style QA where a model has to fill in the blanks.",
"For example, the Children's Book Test (Hill et al., 2015) and the Who-did-What dataset (Onishi et al., 2016).",
"The second one is reading comprehension QA where a model picks the answers for multiple-choice questions or a yes/no question.",
"For examples, RACE (Lai et al., 2017) and DREAM (Sun et al., 2019) datasets.",
"The third one is span-based QA, such as SQuAD (Ra-5396 jpurkar et al., 2016) and MS MARCO (Nguyen et al., 2016) dataset, where a model extracts a text span from the given context as the answer.",
"The fourth one is open-domain QA, where the answers are selected and extracted from a large pool of passages, e.g., the WikiQA (Yang et al., 2015) and Natural Question (Kwiatkowski et al., 2019) datasets.",
"Conversation-related QA tasks have focused on asking sequential questions and answers like a conversation and are grounded on a short passage.",
"DoQA (Campos et al., 2020) is collected based on Stack Exchange, CoQA (Reddy et al., 2019) and QuAC (Choi et al., 2018) are the two most representative conversational QA datasets under this category.",
"CoQA contains conversational QA pairs, free-form answers along with text spans as rationales, and text passages from seven domains.",
"QuAC collected data by a teacher-student setting on Wikipedia sections and it could be open-ended, unanswerable, or context-specific questions.",
"Closest to our work, Dream (Sun et al., 2019) is a multiple-choice dialogue-based reading comprehension examination dataset, but the conversations are in daily chit-chat domains between two people.",
"FriendsQA (Yang and Choi, 2019) is compiled from transcripts of the TV show Friends, which is also chit-chat conversations among characters and only has around one thousand dialogues.",
"Molweni (Li et al., 2020) is built on top of Ubuntu corpus (Lowe et al., 2015) for machine-reading comprehension tasks, but its conversations are short and focused on one single domain, and their questions are less diverse due to their data collection strategy (10 annotators).",
"In general, our task is also related to conversations as a knowledge source.",
"The dialogue state tracking task in task-oriented dialogue systems can be viewed as one specific branch of this goal as well, where tracking slots and values can be reframed as a QA task (McCann et al., 2018; Li et al., 2021b).",
"Moreover, extracting user attributes from open-domain conversations (Wu et al., 2019), getting to know the user through conversations, can be marked as one of the potential applications.",
"The very recently proposed query-based meeting summarization dataset, QMSum (Zhong et al., 2021), can be viewed as one application of treating conversations as databases and conduct an abstractive question answering task.",
"QAConv is a new dataset that conducts QA on informative conversations such as emails, panels, and channels.",
"We show the unique challenges of our tasks in both chunk mode with oracle partial conversations and full mode with a retrieval stage.",
"We find that state-of-the-art QA models have limited dialogue understanding and tend to predict our answerable QA pairs as unanswerable.",
"We provide a new testbed for QA on conversation tasks to facilitate future research.",
"The QAConv benchmark proposed in this work could be helpful in creation of more powerful conversation retrieval and QA on conversations.",
"However, QAConv benchmark only covers a few domains as background conversations.",
"Furthermore, even with our best efforts to ensure high quality and accuracy, the dataset might still contain incorrect labels and biases in some instances, which could be the inherent mistakes from the original dialogue datasets.",
"This could pose a risk if models that are evaluated or built using this benchmark are used in domains not covered by the dataset or if they leverage evidence from unreliable or biased dialogues.",
"Thus, the proposed benchmark should not be treated as a universal tool for all domains and scenarios.",
"We have used only the publicly available transcripts data and adhere to their guideline, for example, the Media data is for research-purpose only and cannot be used for commercial purpose.",
"As conversations may have biased views, for example, specific political opinions from speakers, the transcripts and QA pairs will likely contain them.",
"The content of the transcripts and summaries only reflect the views of the speakers, not the au-thors' point-of-views.",
"We would like to remind our dataset users that there could have potential bias, toxicity, and subjective opinions in the selected conversations which may impact model training.",
"Please view the content and data usage with discretion."
] | [
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Transformers are not suited for processing long documents, due to their quadratically increasing memory and time consumption.",
"Simply truncating a long document or applying the sparse attention mechanism will incur the context fragmentation problem or lead to an inferior modeling capability against comparable model sizes.",
"In this paper, we propose ERNIE-DOC , a document-level language pretraining model based on Recurrence Transformers (Dai et al., 2019).",
"Two well-designed techniques, namely the retrospective feed mechanism and the enhanced recurrence mechanism, enable ERNIE-DOC 1 , which has a much longer effective context length, to capture the contextual information of a complete document.",
"We pretrain ERNIE-DOC to explicitly learn the relationships among segments with an additional document-aware segment-reordering objective.",
"Various experiments were conducted on both English and Chinese document-level tasks.",
"ERNIE-DOC improved the state-of-the-art language modeling result of perplexity to 16.8 on WikiText-103.",
"Moreover, it outperformed competitive pretraining models by a large margin on most language understanding tasks, such as text classification and question answering.",
"Transformers (Vaswani et al., 2017) have achieved remarkable improvements in a wide range of natural language tasks, including language modeling (Dai et al., 2019), text classification (Yang et al., 2019), and question answering (Devlin et al., 2018; Radford et al., 2019).",
"This success is largely due to the self-attention mechanism, which enables the network to capture contextual information from the *indicates equal contribution.",
"1 Source code and pre-trained checkpoints can be found at https://github.com/PaddlePaddle/ERNIE/ tree/repro/ernie-doc .",
"entire input sequence.",
"Nevertheless, the memory usage and computation complexity caused by the self-attention mechanism grows quadratically with the sequence length, incurring excessive cost when processing a long document on existing hardware.",
"Currently, the most prominent pretrained models, such as BERT (Devlin et al., 2018), are used on fixed-length input segments of a maximum of 512 tokens owing to the aforementioned limitation.",
"Thus, a long document input must be partitioned into smaller segments of manageable sizes.",
"However, this leads to the loss of important cross-segment information, that is, the context fragmentation problem (Dai et al., 2019), as shown in Fig.",
"1(a).",
"To mitigate the problem of insufficient interactions among the partitioned segments of long documents, Recurrence Transformers (Dai et al., 2019; Rae et al., 2019) permit the use of contextual information from previous segments in computing the hidden states for a new segment by maintaining a memory component from the previous activation; this enables the modeling of long documents.",
"In addition, Sparse Attention Transformers (Child et al., 2019; Tay et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020) focus on reducing the complexity of self-attention operations to explicitly improve the modeling length, but only up to a restricted context length (4,096) due to resource limitations.",
"We argue that existing strategies are not sufficiently effective or reliable, because the contextual information of a complete document is still not available for each segment during the training phase .",
"As depicted in Fig. 1, when training on segment S 2 , the model is ideally optimized by maximizing P ( y | ( S 1 , S 2 , S 3 )) conditioned on the contextual information of the entire document D = { S 1 , S 2 , S 3 } , in contrast to the following suboptimal solutions: P ( y | S 2 ) for Vanilla/Sparse Transformers 2 and P ( y | ( S 1 , S 2 )) for Recurrence Transformers.",
"To address this limitation, we propose ERNIE-DOC (A Retrospective Long-Document Modeling Transformer) based on the Recurrence Transformer paradigm.",
"Inspired by the human reading behavior of skimming a document first and then looking back upon it attentively, we design a retrospective feed mechanism in which segments from a document are fed twice as input.",
"As a result, each segment in the retrospective phase could explicitly fuse the semantic information of the entire document learned in the skimming phase, which prevents context fragmentation.",
"However, simply incorporating the retrospective feed mechanism into Recurrence Transformers is infeasible because the maximum effective context length is limited by the number of layers (Dai et al., 2019), as shown in Fig. 1",
"(b).",
"Thus, we present an enhanced recurrence mechanism , a drop-in replacement for a Recurrence Transformer, by changing the shifting-one-layer-downwards recurrence to the same-layer recurrence.",
"In this manner, the maximum effective context length can be expanded, and past higher-level representations can be exploited to enrich future lower-level representations.",
"Moreover, we introduce a segment-reordering objective to pretrain a document-level model.",
"Specifically, it is a document-aware task of predicting the correct order of the permuted set of segments of a document, to model the relationship among segments directly.",
"This allows ERNIE-2 For Sparse Transformers, the length of segment S 2 could be up to 4,096 in Beltagy et al. (2020); Zaheer et al. (2020).",
"DOC to build full document representations for prediction.",
"This is analogous to the sentence-reordering task in ERNIE 2.0 (Sun et al., 2020b) but at a segment level of granularity, spanning (commonly) multiple training steps.",
"We first evaluate ERNIE-DOC on autoregressive word-level language modeling using the enhanced recurrence mechanism, which, in theory, allows the model to process a document with in-finite words.",
"ERNIE-DOC achieves state-of-the-art (SOTA) results on the WiKiText-103 benchmark dataset, demonstrating its effectiveness in long-document modeling.",
"Then, to evaluate the potential of ERNIE-DOC on document-level natural language understanding (NLU) tasks, we pretrained the English ERNIE-DOC on the text corpora utilized in BigBird (Zaheer et al., 2020) from the RoBERTa-released checkpoint, and the Chinese ERNIE-DOC on the text corpora utilized in ERNIE 2.0 (Sun et al., 2020b) from scratch.",
"After pretraining, we fine-tuned ERNIE-DOC on a wide range of English and Chinese downstream tasks, including text classification, question answering and keypharse extraction.",
"Empirically, ERNIE-DOC consistently outperformed RoBERTa on various benchmarks and showed significant improvements over other high-performance long-text pretraining models for most tasks.",
"Sparse Attention Transformers have been extensively explored (Child et al., 2019; Tay et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020).",
"The key idea is to sparsify the self-attention operation, which scales quadratically with the sequence length.",
"For instance, the Sparse Transformer (Child et al., 2019) uses a dilated sliding window that reduces the complexity to O ( L L ) , where L is the sequence length.",
"Reformer (Kitaev et al., 2020) further reduces the complexity to O ( L log L ) using locality-sensitive hashing attention to compute the nearest neighbors.",
"BP-Transformers (Ye et al., 2019) employs a binary partition for the input sequence.",
"Recently, Longformer (Beltagy et al., 2020) and BigBird (Zaheer et al., 2020) have been proposed, and both achieved state-of-the-art performance on a variety of long-document tasks.",
"They reduce the complexity of self-attention to O ( L ) by combining random attention, window attention, and global attention.",
"However, it has been proven in Zaheer et al. (2020) that sparse attention mechanisms cannot universally replace dense attention mechanisms; moreover, solving the simple problem of finding the furthest vector requires ( n ) -layers of a sparse attention mechanism but only O (1) layers of a dense attention mechanism.",
"In addition, the aforementioned methods require customized CUDA kernels or TVM programming to implement sparse attention, which are not maintainable and are difficult to use.",
"In this study, we adopt a different approach to adapting Recurrence Transformers for a pretraining-then-finetuning setting, to model a long document.",
"Recurrence Transformers (Dai et al., 2019; Rae et al., 2019) have been successfully applied in generative language modeling.",
"They employ the Transformer decoder as a parametric model for each conditional distribution in p ( x ) = (cid:81) Lt =1 p ( x t | x <t ) , where x denotes a text sequence.",
"To capture long dependencies, they process the text in segments from left to right based on the segment recurrence mechanism (Dai et al., 2019).",
"This mechanism maintains a memory bank of past activations at each layer to preserve a history of context.",
"Compressive Transformer (Rae et al., 2019) adds a compressive memory bank to sufficiently store old activations instead of discarding them, which facilitates long-range sequence learning.",
"However, these methods operate from left to right, which limits their capacity for discriminative language understanding tasks that require bidirectional information.",
"XLNet (Yang et al., 2019) proposed a permutation language modeling objective to construct bidirectional information and achieve supe-rior performance in multiple NLP tasks; however, its application to long-document modeling tasks remains largely unexplored.",
"ERNIE-DOC builds on the ideas of the Recurrence Transformers to 1) tackle the limitation of Recurrence Transformers for utilizing bidirectional contextual information and 2) improve the behavior of the segment recurrence mechanism to capture longer dependencies.",
"Hierarchical Transformers (Zhang et al., 2019; Lin et al., 2020) have enabled significant progress on numerous document-level tasks, such as document summarization (Zhang et al., 2019) and document ranking (Lin et al., 2020).",
"Similar to Vanilla Transformers, Hierarchical Transformers also split long documents into shorter segments with manageable lengths and then feed them independently to produce corresponding segment-level semantic representations.",
"Unlike in Vanilla Transformers, however, separate Transformer layers are used in Hierarchical Transformers to process the concatenation of these representations.",
"Hierarchical Transformers ignore the contextual information from the remaining segments when processing each segment of a long document, thus suffering from the context fragmentation problem.",
"In this section, we first describe the background (Sec. 3.1) that ERNIE-DOC builds on.",
"Then, we present the implementation of ERNIE-DOC , including the retrospective feed mechanism in Sec. 3.2, the enhanced recurrence mechanism in Sec. 3.3, and the segment-reordering objective in Sec. 3.4.",
"Formally, a long document D is sliced into T sequential segments, denoted as { S 1 , S 2 , ..., ST } , where S = { x , 1 , x , 2 , ..., x ,L } is the -th segment with L tokens; x denotes a single token.",
"Vanilla, Sparse, and Recurrence Transformers employ different strategies to produce the hidden state h n RL d for segment S at the n -th layer: (cid:101) h n 1 +1 = (cid:40) h n 1 +1 , Vanilla or Sparse Transformers [ SG ( h n 1 ) h n 1 +1 ] , Recurrence Transformers , q n +1 , k n +1 , v n +1 = h n 1 +1 W (cid:62) q , (cid:101) h n 1 +1 W (cid:62) k , (cid:101) h n 1 +1 W (cid:62) v .",
"h n +1 = Transformer-Block ( q n +1 , k n +1 , v n +1 ) .",
"(1) where q RL d , k , and v R ( L + m ) d are the query, key and value vectors, respectively with hidden dimension d and memory length m (Note that m = 0 for Vanilla or Sparse Transform-ers); (cid:101) h n 1 +1 R ( L + m ) d is the extended context; W R d d represents learnable linear projection parameters; the function SG ( ) denotes the stop-gradient operation; and the notation [ ] denotes the concatenation of two hidden states along the length dimension.",
"In contrast to Vanilla or Sparse Transformers, where h n +1 is produced using only itself, Recurrence Transformers introduce a segment-level recurrence mechanism to promote interaction across segments.",
"The hidden state computed for the previous segment h n 1 is cached as an auxiliary context to help process the current segment h n .",
"However, from the concatenation part in Eq.",
"1, i.e., [ SG ( h n 1 ) h n 1 +1 ] , there is apparently a constraint that the current hidden state can only fuse information from the previous segments.",
"In S 1 S 2 S 3 S 4 Recurrence Transformers S 1 S 2 S 3 S 4 Larger E ective Context Length S 1 S 2 S 3 S 4 ERNIE-DOC Transformer block Memory concatenation Hidden states input E ective Context Larger E ective Context Length Retrospective Phase Layer-1 Layer-2 Layer-3 Layer-1 Layer-2 Layer-3 The Retrospective Phase Figure 2: Illustrations of ERNIE-DOC and Recurrence Transformers, where models with three layers take as input a long document D which is sliced into four segments S i , i [1 , 2 , 3 , 4] .",
"ERNIE-DOC employs a retrospective feed mechanism to address the unavailability of the contextual information of a complete document for each segment .",
"The segments from a long document are twice fed as input.",
"Mimicking the human reading behavior, we refer to the first and second input-taking phases as the skimming and retrospective phases, respectively.",
"In the skimming phase, we employ a recurrence mechanism to cache the hidden states for each segment.",
"In the retrospective phase, we reuse the cached hidden states from the skimming phase to enable bi-directional information flow.",
"Naively, we can rewrite Eq.",
"1 to obtain the contextual information of an entire document in the skimming phase to be utilized in the retrospective phase as follows, (cid:98) H = [ (cid:98) H 11: T (cid:98) H 21: T (cid:98) HN 1: T ] , (skim. phase) (cid:101) h n 1 +1 = [ SG ( (cid:98) H h n 1 ) h n 1 +1 ] , (retro. phase) (2) where (cid:98) H R ( L T N ) d denotes the cached hidden states in the skimming phase with T segments, L length of each segment and total N layers, and (cid:98) H i 1: T = [ (cid:98) h i 1 (cid:98) h i 2 (cid:98) h iT ] is the concatenation of i -th layer's hidden states of the skimming phase.",
"Thus, the extended context (cid:101) h n 1 +1 is guaranteed to capture the bidirectional contextual information of the entire document.",
"However, it will incur massive memory and computation cost for directly employing (cid:98) H in self-attention mechanism.",
"Henceforth, the main issue is how (cid:98) H should be implemented in a memoryand computation-efficient manner.",
"By rethinking segment-level recurrence (Dai et al., 2019), we observe that the largest possible context dependency length increases linearly w.r.t the number of layers ( N ).",
"For instance, at i -th layer, (cid:98) h i have the longest dependency to (cid:98) h 1 ( i 1) .",
"Thus, to minimize memory and computation consumption, hidden states from the N -th layer (top-layer) are included at a stride of N , which is suf-ficient to build the contextual information of an entire document.",
"Formally, (cid:98) H can be reduced to (cid:98) H r = [ (cid:98) h NN (cid:98) h N 2 N (cid:98) h N (cid:98) T/N (cid:99) N ] (Note that when T is not evenly divisible by N , the last hidden state (cid:98) h NT need to be included).",
"However, for a long document input, the extra computational and memory cost of (cid:98) H r R (cid:100) T/N (cid:101) d where T (cid:29) N is still excessive on existing hardware.",
"To effectively utilize the retrospective feed mechanism in practice, an ideal strategy is to ensure that the cached hidden state h n 1 already contains the contextual information of an entire document without explicitly taking (cid:98) H or (cid:98) H r as input.",
"Essentially, we should tackle the problem of limited effective context length in the segment-level recurrence mechanisms.",
"Herein, we introduce the enhanced recurrence mechanism, a drop-in replacement for the segment-level recurrence mechanism, by changing the shifting-one-layer-downwards recurrence to the same-layer recurrence as follows: (cid:101) h n 1 +1 = [ SG ( h n ) h n 1 +1 ] (3) where the cached hidden state h n 1 in Eq.",
"1 and Eq.",
"2 is replaced with h n in Eq.",
"3. As shown in Fig. 2, when the retrospective feed mechanism is combined with the enhanced recurrence mechanism, every segment in the retrospective phase (shown in the box with a green dotted border) has bidirectional contextual information of the entire text input.",
"We successfully modeled a larger effective context length (shown in the box with a orange dotted border) than traditional Recurrence Transformers can without extra memory and computation costs.",
"Another benefit of the enhanced recurrence scheme is that past higher-level representations can be exploited to enrich future lower-level representations.",
"In addition to the masked language model (MLM) objective (Devlin et al., 2018), we introduce an additional document-aware task called segment-reordering objective for pretraining.",
"Benefitting from the much larger effective context length provided by the enhanced recurrence mechanism, the goal of the segment-reordering objective is to predict the correct order for the permuted set of segments of a long document, to explicitly learn the relationships among segments.",
"During the pretraining process of this task, a long text input D is first randomly partitioned into 1 to m chunks; then, all the combinations are shuffled in a random order.",
"As shown in Fig. 3, D is partitioned into three chunks and then permuted, that is, D = { C 1 , C 2 , C 3 } = D = { C 2 , C 3 , C 1 } , where C i denotes the i -th chunk.",
"Subsequently, the permuted long context D is split into T sequential segments as a common practice, denoted as D = { S 1 , S 2 , ..., ST } .",
"We let the pretrained model reorganize these permuted segments, modeled as a K -class classification problem, where K = (cid:80) mi =1 i !",
".",
"The pretraining objective is summarized as follows for the -th input segment: max log p ( S | S ) + 1 = T log p ( D| D ) where S is the corrupted version of S , which is obtained by randomly setting a portion of tokens S 1 S 2 ST Segments (~512 tokens each) (cid:51)(cid:72)(cid:85)(cid:80)(cid:88)(cid:68)(cid:87)(cid:72)(cid:71)(cid:3)(cid:38)(cid:75)(cid:88)(cid:81)(cid:78)(cid:86)(cid:3)(cid:82)(cid:73) (cid:68)(cid:3)(cid:47)(cid:82)(cid:81)(cid:74)(cid:3)(cid:55)(cid:72)(cid:91)(cid:87)(cid:3)(cid:44)(cid:81)(cid:83)(cid:88)(cid:87) Model label = C 1 C 2 C 3 C 2 : [Related Work] Sparse attention based transformers are largely explored C 3 : [Proposed Method] In this section, we firstly describe the background of proposed ERNIE-DOC C 1 : [Introduction] Transformers have achieved remarkable improvements (cid:335) (cid:335) Figure 3: Illustrations of segment-reordering objective.",
"to [MASK] ; D is the permutated version of D ; is the model parameter; and 1 = T indicates that the segment-reordering objective is optimized only at the T -th step.",
"Autoregressive language modeling aims to estimate the probability distribution of an existing to-ken/character based on previous tokens/characters in an input sequence.",
"For comparison with previous work, we conducted experiments on word-level LM, that is, WikiText-103 (Merity et al., 2016), which is a document-level language modeling dataset.",
"For autoregressive language modeling, we use a memory-enhanced Transformer-XL (Dai et al., 2019), that is, we employ our enhanced recurrence mechanism to replace the primitive one used in the Transformer-XL.",
"Additionally, as proposed by Segatron (Bai et al., 2020), we introduce the segment-aware mechanism into Transformer-XL.",
"Based on Transformer-XL, we trained a base-size model (L=16, H=410, A=10) and a large-size model (L=18, H=1,024, A=16) 3 .",
"The models were trained for 200K/400K steps using a batch size of 64/128 for the base/large configurations.",
"During the training phase, the sequence length and memory length were limited to 150 and 384 for the base and the large model, respectively.",
"The remaining hyper-parameters were identical to those of Transformer-XL.",
"3 We denote the number of Transformer layers as L, the hidden size as H, and the number of self-attention heads as A. Models #Param.",
"Tab.",
"1 summarizes the evaluation results for WikiText-103.",
"ERNIE-DOC achieves an impressive improvement compared with Transformer-XL: the perplexity (PPL) decreases by 3.0 for the base model and by 1.5 for the large model.",
"Finally, we improve the state-of-the-art result of PPL to 21.0 (the base model) and 16.8 (the large model).",
"English Data.",
"To allow ERNIE-DOC to capture long dependencies in pretraining, we compiled a corpus from four standard datasets: WIKIPEDIA , BOOKSCORPUS (Zhu et al., 2015), CC-NEWS 4 , and STORIES (Trinh and Le, 2018) (details listed in Tab. 2).",
"We tokenized the corpus using the RoBERTa wordpieces tokenizer (Liu et al., 2019) and duplicated the pretraining data 10 times.",
"Chinese Data.",
"The Chinese text corpora used in ERNIE 2.0 (Sun et al., 2020b) were adopted for pretraining ERNIE-DOC .",
"Pretraining.",
"We trained three sizes of models for English tasks: small (L=6, H=256, A=4), base (L=12, H=768, A=12), and large (L=24, H=1,024, 4 We used news-please to crawl English news articles published between September 2016 and February 2019 and adopted Message Digest Algorithm5 (MD5) for deduplication.",
"A=16).",
"For Chinese tasks, we used only one size, i.e., base (L=12, H=768, A=12).",
"We limited the length of the sentences in each mini-batch to 512 tokens and the length of the memory to 128.",
"The models were trained for 500K/400K/100K steps using a batch size of 2,560/2,560/3,920 sentences for the small/base/large configurations.",
"ERNIE-DOC was optimized with the Adam (Kingma and Ba, 2014) optimizer.",
"The learning rate was warmed up over the first 4,000 steps to a peak value of 1e-4, and then it linearly decayed.",
"The remaining pretraining hyperparameters were the same as those of RoBERTa (Liu et al., 2019) (see Tab. 12).",
"Additionally, we employed relative positional embedding (Shaw et al., 2018) in our model pretraining because it is necessary for reusing hidden state without causing temporal confusion (Dai et al., 2019).",
"Finetune.",
"In contrast to previous models, such as BERT, RoBERTa, and XLNet, the proposed model employs the retrospective feed mechanism and the enhanced recurrence mechanism during the finetuning phase to fully utilize the advantages of these two strategies.",
"Results on Long-Text Classification Tasks .",
"We consider two datasets: IMDB reviews (Maas et al., 2011) and Hyperpartisan News Detection (HYP) (Kiesel et al., 2019).",
"The former is a widely used sentiment analysis dataset containing 50,000 movie reviews, labeled as positive or negative.",
"The latter contains news that takes extreme left-wing or right-wing standpoints.",
"The documents in HYP are extremely long (50% of the samples contain more than 537 tokens) and are thus suitable for testing long-text classification ability.",
"Tab.",
"3 summarizes the results of the ERNIE-DOC -Base and ERNIE-DOC -Large models for long-text classification tasks, and ERNIE-DOC achieves a SOTA result.",
"On IMDB, we observed a modest perfor-Models TQA HQA F1 Span Supp Joint RoBERTa 74.3 73.5 83.4 63.5 Longformer 75.2 74.3 84.4 64.4 BigBird 79.5 75.5 87.1 67.8 ERNIE-DOC 80.1 79.4 86.3 70.5 Longformer-Large 77.8 81.0 85.8 71.4 BigBird-Large -81.3 89.4 -ERNIE-DOC -Large 82.5 82.2 87.6 73.7 Table 4: Results on TQA and HQA dev dataset for document-level QA.",
"mance gain compared with RoBERTa.",
"This is because nearly 90% of the samples in the dataset consist of fewer than 569 tokens.",
"Unlike on IMDB, ERNIE-DOC surpasses the baseline models on HYP by a substantial margin, demonstrating its capability of utilizing information from a long document input.",
"Note that we include XLNet-Large, the previous SOTA pretraining model on the IMDB dataset, as the baseline for a large model setting; ERNIE-DOC achieves a result comparable to that of XLNet-Large.",
"Results on Document-level Question-Answering Tasks .",
"We utilized two document-level QA datasets (Wikipedia setting of TriviaQA (TQA) (Joshi et al., 2017) and distractor setting of HotpotQA (HQA) (Yang et al., 2018)) to evaluate the reasoning ability of the models over long documents.",
"TQA and HQA are extractive QA tasks, and we follow the simple QA model of BERT (Devlin et al., 2018) to predict an answer with the maximum sum of start and end logits across multiple segments of a sample.",
"In addition, we use a modified cross-entropy loss (Clark and Gardner, 2017) for the TQA dataset and use a two-stage model (Groeneveld et al., 2020) with the backbone of ERNIE-DOC for the HQA dataset.",
"Tab.",
"4. shows that ERNIE-DOC outperforms RoBERTa and Longformer by a considerable margin on these two datasets, and is comparable to current SOTA long-document model, i.e., BigBird on HQA in large-size model setting.",
"Results on the Keyphrase Extraction Task .",
"We include OpenKP (Xiong et al., 2019) dataset to evaluate ERNIE-DOC 's ability to extract keyphrases from a long document.",
"Each document contains up to three short keyphrases and we follow the model setting of JointKPE (Sun et al., 2020a) and ETC (Ainslie et al., 2020) by applying CNNs on BERT's output to compose n-gram embeddings for classification.",
"We report the results of base-size models in Tab.",
"5 under no-visual-features setting for easy and fair comparison with baselines.",
"ERNIE-DOC performs stably better on all metrics on the OpenKP dataset.",
"We conducted extensive experiments on seven Chinese natural language understanding (NLU) tasks, including machine reading comprehension (CMRC2018 (Cui et al., 2018), DRCD (Shao et al., 2018), DuReader (He et al., 2017), C 3 (Sun et al., 2019a)), semantic similarity (CAIL2019-SCM (CAIL) (Xiao et al., 2019)), and long-text classification (IFLYTEK (IFK) (Xu et al., 2020), THUCNews (THU) 5 (Sun et al., 2016)).",
"The documents in all the aforementioned datasets are sufficiently long to be used to evaluate the effectiveness of ERNIE-DOC on long-context tasks (see detailed datasets statistics in Tab. 9).",
"We reported the mean results with five runs for the seven Chinese tasks in Tab.",
"6, and summarized the hyperparameters in Tab.",
"16.",
"ERNIE-DOC outperforms previous models across these Chinese NLU tasks by a significant margin in the base-size model group.",
"4.2.5 Ablation Studies No.",
"No.IV and No.V, we see that segment-level recurrence is necessary for modeling long documents and produces 2.74 and 3.95 % points improvement on the TQA and HYP dateset, respectively.",
"Moreover, a substantial improvement is achieved using the enhance recurrence mechanism (2.29% point on TQA and 1.40% point on HYP, see No.III IV).",
"Retrospective feed mechanism further improves 0.21% point on TQA and 1.33% point on HYP (No.II No.III).",
"Considering different types of tasks, we observe that on HYP, an extremely long text classification dataset, a substantial improvement is achieved using the segment-reordering objective (1.5% point).",
"This indicates that the [CLS] token, pretrained using the segment-reordering objective, is more adaptable to the document-level text classification task.",
"Effect of enhanced recurrence mechanism with regard to different maximum sequence lengths .",
"As depicted in Fig. 4, the enhanced recurrence mechanism plays an important role in pretraining an effective language model with lower PPL and higher accuracy under both the maximum sequence input lengths of 128 and 512.",
"The effect of the enhanced recurrence mechanism is more significant under a smaller maximum sequence length, even makes the ERNIE-DOC -Small (max-len:128) comparable to ERNIE-DOC -Small w/o en recur (max-len:512) w.r.t accuracy.",
"This intriguing property of the enhanced recurrence mechanism enables more efficient model training and inference by reducing maximum sequence length while remaining comparable modeling capability.",
"In this paper, we proposed ERNIE-DOC , a document-level language pretraining model based on the Recurrence Transformers paradigm.",
"Two well-designed mechanisms, namely the retrospective feed mechanism and the enhanced recurrent mechanism, enable ERNIE-DOC , which theoretically has the longest possible dependency, to model bidirectional contextual information of a complete document.",
"Additionally, ERNIE-DOC is pretrained with a document-aware segment-reordering objective to explicitly learn the relationship among segments of a long context.",
"Experiments on various downstream tasks demonstrate that ERNIE-DOC outperforms existing strong pretraining models such as RoBERTa, Longformer, and BigBird and achieves SOTA results on several language modeling and language understanding benchmarks.",
"In future studies, we will evaluate ERNIE-DOC on language generation tasks, such as generative question answering and text summarization.",
"We will also investigate its potential applicability in other areas, such as computational biology.",
"Another possibility is to incorporate graph neural networks into ERNIE-DOC to enhance its modeling capability for tasks that require multi-hop reasoning and long-document modeling ability.",
"This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"other"
] |
[
"Molecular representation learning plays an essential role in cheminformatics.",
"Recently, language model-based approaches have gained popularity as an alternative to traditional expert-designed features to encode molecules.",
"However, these approaches only utilize a single molecular language for representation learning.",
"Motivated by the fact that a given molecule can be described using different languages such as Simplified Molecular Line Entry System (SMILES), the International Union of Pure and Applied Chemistry (IUPAC), and the IUPAC International Chemical Identifier (InChI), we propose a multilingual molecular embedding generation approach called MM-Deacon ( m ultilingual m olecular d omain e mbedding a nalysis via con trastive learning).",
"MM-Deacon is pre-trained using SMILES and IUPAC as two different languages on large-scale molecules.",
"We evaluated the robustness of our method on seven molecular property prediction tasks from MoleculeNet benchmark, zero-shot cross-lingual retrieval, and a drug-drug interaction prediction task.",
"Drug discovery process involves screening of millions of compounds in the early stages of drug design, which is time consuming and expensive.",
"Computer-aided drug discovery can reduce the time and cost involved in this process via automating various cheminformatics tasks (Kontogeorgis and Gani, 2004; Xu et al., 2017; Winter et al., 2019).",
"Traditional methods to encode molecules such as fingerprint generation rely heavily on molecular fragment-level operations on top of molecule graph constructed by molecular atoms and bonds (Burden, 1989; Bender and Glen, 2004; Vogt and Bajorath, 2008; Muegge and Mukherjee, 2016).",
"An example of such methods is Morgan fingerprint, also known as Extended-Connectivity Fingerprint (ECFP) (Morgan, 1965; Rogers and Hahn, 2010), where a fixed binary hash function is applied on each atom and its neighborhood.",
"These kinds of approaches focus on local features, hence they may not capture global information.",
"In addition to molecule graph, a given molecule can also be described with different languages such as Simplified Molecular Line Entry System (SMILES), the International Union of Pure and Applied Chemistry (IUPAC), and the IUPAC International Chemical Identifier (InChI).",
"Particularly, SMILES is widely used to represent molecule structures as ASCII strings (Weininger, 1988; Favre and Powell, 2013) at an atom and bond level.",
"IUPAC nomenclature, on the other hand, serves the purpose of systematically naming organic compounds by basic words that indicate the structure of the compound and prioritize on functional groups to facilitate communication (Panico et al., 1993).",
"Fig. 1 shows a comparison of SMILES and IUPAC characteristics for the same molecule.",
"The SMILES string is created by traversing the molecule graph, where each letter in the SMILES string (such as C, F, N, O in Fig. 1) corresponds to an atom on the graph, and other characters represent positions and connectivity.",
"However, IUPAC names are akin to a natural language, and morphemes in the IUPAC name (like fluoro, prop, en, yl in this example) often represent specific types of substructure on the molecule graph, which are also responsible for characteristic chemical reactions of molecules.",
"Advances in natural language processing (NLP) have been very promising for molecule embedding generation and molecular property prediction (Xu et al., 2017; Gmez-Bombarelli et al., 2018; Samanta et al., 2020; Koge et al., 2021; Honda et al., 2019; Shrivastava and Kell, 2021; Goh et al., 2017; Schwaller et al., 2019; Payne et al., 2020; Aumentado-Armstrong, 2018).",
"It is important to note that all of the methods mentioned above work with SMILES representation only.",
"Therefore, the underlying chemical knowledge encoded in the em-3441 Figure 1: Schematic illustration of differences in SMILES and IUPAC characteristics for the same molecule.",
"bedding is restricted to a single language modality.",
"Transformer models trained with self-supervised masked language modeling (MLM) loss (Vaswani et al., 2017) in chemical domain (Wang et al., 2019; Chithrananda et al., 2020; Elnaggar et al., 2020; Rong et al., 2020; Schwaller et al., 2021; Bagal et al., 2021) have also been used for molecular representation learning.",
"However, pre-training objectives like MLM loss tend to impose task-specific bias on the final layers of Transformers (Carlsson et al., 2020), limiting the generalization of the embeddings.",
"In recent years, contrastive learning has been successful in multimodal vision and language research (Radford et al., 2021; Meyer et al., 2020; Shi et al., 2020; Cui et al., 2020; Chen et al., 2021; Alayrac et al., 2020; Akbari et al., 2021; Lee et al., 2020; Liu et al., 2020).",
"Radford et al. (2021) used image-text pairs to learn scalable visual representations.",
"Carlsson et al. (2020) showed the superiority of contrastive objectives in acquiring global (not fragment-level) semantic representations.",
"In light of these advances, we propose MM-Deacon ( m ultilingual m olecular d omain e mbedding a nalysis via con trastive learning), a molecular representation learning algorithm built on SMILES and IUPAC joint training.",
"Transformers are used as base encoders in MM-Deacon to encode SMILES and IUPAC, and embeddings from encoders are projected to a joint embedding space.",
"Afterwards, a contrastive objective is used to push the embeddings of positive cross-lingual pairs (SMILES and IUPAC for the same molecule) closer together and the embeddings of negative cross-lingual pairs (SMILES and IUPAC for different molecules) farther apart.",
"Here instead of using SMILES and IUPAC for sequence-to-sequence translation (Rajan et al., 2021; Krasnov et al., 2021; Handsel et al., 2021), we obtain positive and negative SMILES-IUPAC pairs and contrast their embeddings at the global molecule level rather than the fragment level.",
"Different molecule descriptors are thus integrated into the same joint embedding space, with mutual information maximized across distinct molecule languages.",
"We pre-train MM-Deacon on 10 million molecules chosen at random from the publicly available PubChem dataset (Kim et al., 2016) and then use the pre-trained model for downstream tasks.",
"Our main contributions are as follows: We propose MM-Deacon, a novel approach for utilizing multiple molecular languages to generate molecule embeddings via contrastive learning.",
"To the best of our knowledge, we are the first to leverage mutual information shared across SMILES and IUPAC for molecule encoding.",
"We conduct extensive experiments on a variety of tasks, including molecular property prediction, cross-lingual molecule retrieval, and drug-drug interaction (DDI) prediction, and demonstrate that our approach outperforms baseline methods and existing state-of-the-art approaches.",
"Deep learning tasks commonly face two challenges: first, dataset size is often limited, and second, annotations are scarce and expensive.",
"A pre-training scheme can benefit downstream tasks by leveraging large-scale unlabeled or weakly labeled data.",
"Such pre-training and fine-tuning frameworks have recently sparked much interest in the molecular domain (Hu et al., 2019; Samanta et al., 2020; Chithrananda et al., 2020; Rong et al., 2020; Shrivastava and Kell, 2021; Xue et al., 2021; Zhu et al., 2021; Wang et al., 2021; Liu et al., 2021).",
"Existing pre-training methods can be divided into three categories based on the models used: pre-training with graph neural networks (GNNs), pre-training with language models, and pre-training with hybrid models.",
"nodes and bonds as edges.",
"Hu et al. (2019) pretrained GNNs on 2 million molecules using both node-level and graph-level representations with attribute masking and structure prediction objectives.",
"MolCLR (Wang et al., 2021) used subgraph-level molecule data augmentation scheme to create positive and negative pairs and contrastive learning to distinguish positive from negative.",
"GraphMVP (Liu et al., 2021) was pre-trained on the consistency of 2D and 3D molecule graphs (3D graphs formed by adding atom spatial positions to 2D graphs) and contrastive objectives with GNNs.",
"Pre-training with language models.",
"Language models are widely used to encode SMILES for molecular representation learning.",
"Xu et al. (2017) reconstructed SMILES using encoder-decoder gated recurrent units (GRUs) with seq2seq loss, where embeddings in the latent space were used for downstream molecular property prediction.",
"Chemberta (Chithrananda et al., 2020) fed SMILES into Transformers, which were then optimized by MLM loss.",
"FragNet (Shrivastava and Kell, 2021) used encoder-decoder Transformers to reconstruct SMILES and enforced extra supervision to the latent space with augmented SMILES and contrastive learning.",
"X-Mol (Xue et al., 2021) was pretrained by taking as input a pair of SMILES variants for the same molecule and generating one of the two input SMILES as output with Transformers on 1.1 billion molecules.",
"Pre-training with hybrid models.",
"Different molecule data formats can be used collaboratively to enforce cross-modality alignment, resulting in the use of hybrid models.",
"For example, DMP (Zhu et al., 2021) was built on the consistency of SMILES and 2D molecule graphs, with SMILES encoded by Transformers and 2D molecule graphs encoded by GNNs.",
"Unlike other molecule pre-training methods, MM-Deacon is multilingually pre-trained with language models using pairwise SMILES and IUPAC.",
"Compared with using molecule graphs with GNNs, IUPAC names encoded by language models bring in a rich amount of prior knowledge by basic words representing functional groups, without the need for sophisticated graph hyperparameter design.",
"MM-Deacon is a deep neural network designed for SMILES-IUPAC joint learning with the goal of contrasting positive SMILES-IUPAC pairs from nega-Figure",
"nega-Figure 2: Schematic diagram for MM-Deacon pretraining.",
"SMILES and IUPAC are encoded by separate Transformers.",
"Embeddings from encoders are average-pooled globally and projected to a joint chemical embedding space, where contrastive objectives are used to maximize mutual information for SMILES and IUPAC from the same molecule and distinguish SMILES and IUPAC from different molecules.",
"tive pairs and thus maximizing mutual information across different molecule languages.",
"SMILES and IUPAC for the same molecule are regarded as positive pairs, while SMILES and IUPAC for different molecules are considered negative.",
"Transformer encoders with multi-head self-attention layers are utilized to encode SMILES and IUPAC strings.",
"Embeddings from the encoders are pooled globally and projected to the joint chemical embedding space.",
"MM-Deacon is pre-trained on a dataset of 10 million molecules chosen at random from PubChem.",
"We use a Byte-Pair Encoding (BPE) tokenizer for SMILES tokenization, as is shown by Chithrananda et al. (2020) that BPE performed better than regex-based tokenization for SMILES on downstream tasks.",
"For IUPAC name tokenization, a rule-based regex (Krasnov et al., 2021) that splits IUPAC strings based on suffixes, prefixes, trivial names, and so on is employed.",
"The input sequence length statistics as well as the top 20 most frequent tokens in the SMILES and IUPAC corpora are displayed in Figs.",
"9 and 10 (Appendix A).",
"As illustrated in Fig. 2, MM-Deacon takes SMILES and IUPAC strings as the input to separate branches.",
"The input text string s is tokenized and embedded into a numeric matrix representation x within each branch, and the order of the token list is preserved by a positional embedding p x .",
"Then x and p x are ingested by an encoder block that consists of 6 layers of Transformer encoder.",
"A Trans-3443 former encoder has two sub-layers, a multi-head attention layer and a fully-connected feed-forward layer.",
"Each sub-layer is followed by a residual connection and layer normalization to normalize input values for all neurons in the same layer (Vaswani et al., 2017; Ba et al., 2016).",
"The multi-head attention layer acquires long-dependency information by taking all positions into consideration.",
"We then use a global average pooling layer to integrate features at all positions and a projection layer to project the integrated feature vector to the joint embedding space.",
"Thus the final embedding z of x can be expressed as, z ( x ) = ( ( ( x + p x ))) .",
"The maximum input token sequence length is set to 512.",
"For each of the 6 Transformer encoder layers, we choose the number of self-attention heads as 12 and hidden size of 768.",
"The projection layer projects the vector from length of 768 to 512 to make the representation more compact.",
"Thus z ( x ) R 512 .",
"Our goal is to align pairs of language modalities in the joint embedding space by maximizing mutual information of positive pairs and distinguishing them from negative pairs.",
"For this purpose, we use InfoNCE (Oord et al., 2018; Alayrac et al., 2020; Radford et al., 2021) as the contrastive loss.",
"We do not construct negative pairs manually.",
"Instead, during training, we obtain negative pairs in mini-batches.",
"Using a minibatch of NSMILES-IUPAC pairs from N molecules as input, N positive pairs and N 2 N negative pairs can be generated within the correlation matrix of NSMILES strings and N IUPAC strings.",
"More specifically, the only positive pair for i -th SMILES is i -th IUPAC, while the remaining N 1 IUPAC strings form negative pairs with i -th SMILES.",
"Therefore, the InfoNCE loss for i -th SMILES is, L sli = log ( exp ( sim ( z sli , z ipi ) / ) (cid:80) Nj =1 exp ( sim ( z sli , z ipj ) / )) , (2) where sl and ip represent SMILES and IUPAC respectively.",
"sim () is the pairwise similarity function that employs cosine similarity in this work.",
"is the temperature.",
"Likewise, the loss function for i -th IUPAC is, Figure 3: Possible scenarios in the downstream stage.",
"L ip i = log ( exp ( sim ( z sli , z ipi ) / ) (cid:80) Nj =1 exp ( sim ( z sl j , z ip i ) / )) .",
"(3) As a result, the final loss function is as follows, L = 1 2 N (cid:88) t { sl,ip } N (cid:88) i =1 L ti .",
"(4) We pre-train MM-Deacon on 80 V100 GPUs for 10 epochs (15 hours in total) with a 16 batch size on each GPU using AdamW optimizer with a learning rate of 10 6 .",
"The temperature is set as 0.07 as in (Oord et al., 2018).",
"Knowledge gained during pre-training can be transferred to downstream tasks in different ways.",
"Fig. 3 lists two situations that make use of pre-trained MM-Deacon in the downstream stage.",
"MM-Deacon fine-tuning: A task-specific clas-sification/regression head can be attached to pretrained MM-Deacon and the system as a whole can be tuned on downstream task datasets.",
"MM-Deacon fingerprint: Pre-trained MM-Deacon is frozen.",
"An input molecule is embedded as MM-Deacon fingerprint for zero-shot explorations (such as clustering analysis and similarity retrieval) and supervised tasks with the help of an extra classifier.",
"MM-Deacon was evaluated on seven molecular property prediction tasks from MoleculeNet bench-3444",
"mark (Wu et al., 2018), zero-shot cross-lingual retrieval, and a drug-drug interaction (DDI) prediction task.",
"MoleculeNet benchmark provides a unified framework for evaluating and comparing molecular machine learning methods on a variety of molecular property prediction tasks ranging from molecular quantum mechanics to physiological themes, and is widely acknowledged as the standard in the research community (Hu et al., 2019; Chithrananda et al., 2020; Xue et al., 2021; Zhu et al., 2021; Wang et al., 2021; Liu et al., 2021).",
"Four classification datasets and three regression datasets from the MoleculeNet benchmark were utilized to evaluate our approach.",
"Data.",
"The blood-brain barrier penetration (BBBP), clinical trail toxicity (ClinTox), HIV replication inhibition (HIV), and side effect resource (SIDER) datasets are classification tasks in which molecule SMILES strings and their binary labels are provided in each task.",
"Area Under Curve of the Receiver Operating Characteristic curve (ROC-AUC) is the performance metric in which the higher the value, the better the performance.",
"For datasets with multiple tasks like SIDER, the averaged ROC-AUC across all tasks under the same dataset is reported.",
"The fractions of train/val/test sets for each classification task are 0.8/0.1/0.1 with Scaffold split.",
"Note that data split using molecule scaffolds (two-dimensional structural frameworks) results in more structurally distinct train/val/test sets, making it more challenging than random split (Wu et al., 2018).",
"The water solubility data (ESOL), free solvation (FreeSolv), and experimental results of octabol/water distribution coefficient (Lipophilic-ity) datasets are all regression tasks to predict numeric labels given molecule SMILES strings.",
"Root Mean Square Error (RMSE) is used as the evaluation metric in which the lower the value, the better the performance.",
"As recommended by MoleculeNet, random split that divides each dataset into 0.8/0.1/0.1 for train/val/test sets is employed.",
"The results on validation set are used to select the best model.",
"To maintain consistency with MoleculeNet, we ran each task three times, each time with a different data split seed, to obtain the mean and standard deviation (std) of the metric.",
"Details of each dataset such as the number of tasks and molecules it contains are displayed in Table 1.",
"Model.",
"We utilized the model shown in Fig.",
"3(a) in which a linear layer serving as the task-specific head was added to pre-trained MM-Deacon SMILES branch for fine-tuning (IUPAC branch was removed).",
"Cross-entropy loss was employed for classification tasks and MSE loss was employed for regression tasks.",
"Hyperparameter tuning was performed using grid search with possible choices listed in Table 5 (Appendix B).",
"Each task was optimized individually.",
"Results.",
"Table 2 shows the mean and std results for each dataset.",
"The first half of the table displays results imported from MoleculeNet (Wu et al., 2018), while the second section shows the results from MM-Deacon and other state-of-the-art molecular pre-training and fine-tuning approaches.",
"MLM[CLS] denotes our implementation of a Chemberta (Chithrananda et al., 2020) variant that uses the same Transformer settings as MM-Deacon SMILES branch, pre-trained with MLM loss on 10M molecules, and fine-tuned through [CLS] token with the same downstream setting as MM-Deacon.",
"MM-Deacon exceeds the performance of traditional machine learning methods like random forest (RF) and task-specific GNNs reported in MoleculeNet work by a significant margin for most of the tasks.",
"When compared to other pre-training based approaches, MM-Deacon outperforms the existing state-of-the-art approaches in four of the seven datasets and is comparable in the remaining three, with major improvements on ClinTox and FreeSolv.",
"All pre-training based methods were pre-trained on millions of molecules, with the exception of GraphMVP, which was pre-trained on 50K molecules.",
"The requirement that molecules have both 2D and 3D structure information available at 3445 Method BBBP ClinTox HIV SIDER ESOL FreeSolv Lipophilicity RF 71.4 0.0 71.3 5.6 78.1 0.6 68.4 0.9 1.07 0.19 2.03 0.22 0.876 0.040 KernelSVM 72.9 0.0 66.9 9.2 79.2 0.0 68.2 1.3 -Multitask 68.8 0.5 77.8 5.5 69.8 3.7 66.6 2.6 1.12 0.15 1.87 0.07 0.859 0.013 GC 69.0 0.9 80.7 4.7 76.3 1.6 63.8 1.2 0.97 0.01 1.40 0.16 0.655 0.036 Weave 67.1 1.4 83.2 3.7 70.3 3.9 58.1 2.7 0.61 0.07 1.22 0.28 0.715 0.035 MPNN --0.58 0.03 1.15 0.12 0.719 0.031 Hu et al. (2019) 70.8 1.5 78.9 2.4 80.2 0.9 65.2 0.9 -MolCLR (Wang et al., 2021) 73.6 0.5 93.2 1.7 80.6 1.1 68.0 1.1 -DMP (Zhu et al., 2021) 78.1 0.5 95.0 0.5 81.0 0.7 69.2 0.7 --X-Mol (Xue et al., 2021) 96.2 N/A 98.4 N/A 79.8 N/A -0.578 N/A 1.108 N/A 0.596 N/A GraphMVP (Liu et al., 2021) 72.4 1.6 77.5 4.2 77.0 1.2 63.9 1.2 1.029 N/A -0.681 N/A MLM[CLS] 70.6 4.5 93.2 0.1 77.9 0.2 64.8 1.3 0.640 0.023 1.21 0.046 0.804 0.037 MM-Deacon 78.5 0.4 99.5 0.3 80.1 0.5 69.3 0.5 0.565 0.014 0.926 0.013 0.650 0.021 Table 2: Results in terms of mean and std for each dataset included from MoleculeNet benchmark.",
"the same time to be qualified has limited the scala-bility of GraphMVP.",
"MM-Deacon and MLM-CLS both used 6 layers of Transformer blocks to process SMILES.",
"For each task, MM-Deacon, which was pre-trained with both SMILES and IUPAC, outscored MLM-CLS , which was pre-trained with SMILES only.",
"MM-Deacon and DMP performed comparably on the four classification tasks, while DMP used 12 layers of Transformer blocks for SMILES and a 12-layer GNN to encode a molecule 2D graph, which is nearly twice the size of MM-Deacon model.",
"Moreover, we found that BBBP test set is significantly more challenging than the validation set, which is consistent with the results published in the MoleculeNet paper (Wu et al., 2018).",
"The substantially high accuracy X-Mol achieved on the BBBP dataset could be due to either the 1.1 billion molecules they utilized for pre-training or a different dataset division approach they employed.",
"In addition to conducting fine-tuning on supervised tasks like molecular property prediction, pretrained MM-Deacon can be employed directly in large-scale zero-shot analysis.",
"Zero-shot cross-lingual retrieval operates on top of MM-Deacon fingerprint generated by pre-trained MM-Deacon given molecule SMILES or IUPAC as input.",
"This task enables the retrieval of similar molecules across languages without the need for translation, and it can also be used to evaluate the learned agreement in the joint embedding space between SMILES and IUPAC representations.",
"Data.",
"100K molecules were randomly chosen from PubChem dataset after excluding the 10 million molecules used for MM-Deacon pre-training.",
"SMILES and IUPAC strings are provided for each molecule.",
"We used average recall at K (R@1 and R@5) to measure the percentage of the ground truth that appears in the top K retrieved molecules.",
"Model.",
"Pre-trained MM-Deacon was used for MM-Deacon fingerprint generation, as shown in Fig.",
"3(b).",
"As a result, each SMILES and IUPAC string was encoded as MM-Deacon SMILES fingerprint and IUPAC fingerprint respectively.",
"Cosine similarity between a query and molecules in the search candidates was used to determine the ranking.",
"Results.",
"Fig. 4 shows the outcomes of SMILES-to-IUPAC and IUPAC-to-SMILES retrieval in terms of recall.",
"We not only performed retrieval directly on the entire 100K molecules, but also reported the results on smaller groups of molecules (100, 10K) to get a more thorough picture of the retrieval performance.",
"MM-Deacon gets a R@5 above 85% for both types of cross-lingual retrieval even while executing retrieval on 100K molecules.",
"Moreover, Figs.",
"5 and 6 show an example of SMILES-to-IUPAC retrieval and an example of IUPAC-to-SMILES retrieval respectively.",
"Additional retrieval examples for scenarios where the performance is difficult to be quantified, such as retrieval queried by a free combination of 3446 Figure 4: Average recall for cross-lingual retrieval on groups of molecules with different sizes.",
"The effectiveness of combining MM-Deacon fingerprints with a task-specific classifier for supervised learning was tested on a DDI prediction task.",
"The objective of this task is to predict whether or not Table 3: DDI prediction results using 5-fold cross-validation.",
"Data.",
"The DDI dataset (Zhang et al., 2017) used here includes 548 drugs, with 48,584 known interactions, and 101,294 non-interactions (may contain undiscovered interactions at the time the dataset was created).",
"We obtained the SMILES and IUPAC names for each drug from PubChem.",
"Stratified 5-fold cross-validation with drug combination split was utilized.",
"The evaluation metrics are Area Under the ROC Curve (AUC), Area Under the Precision-Recall Curve (AUPR), precision, and recall, with AUPR serving as the primary metric (Zhang et al., 2017).",
"Model.",
"MM-Deacon fingerprints of paired drugs are concatenated and fed into a multi-layer percep-tron (MLP) network implemented by scikit-learn (Pedregosa et al., 2011) for binary classification.",
"Three different types of fingerprints are used for MM-Deacon: SMILES, IUPAC, and concatenated SMILES and IUPAC fingerprints.",
"The MLP has one hidden layer with 200 neurons.",
"ReLU activation and a learning rate of 10 3 are used.",
"Results.",
"As shown in Table 3, MM-Deacon outperforms other methods in terms of AUPR, precision and recall, with the maximum AUPR obtained when SMILES and IUPAC fingerprints were concatenated as input feature set.",
"Ensemble models (Zhang et al., 2017) included extra bioactivity related features in addition to drug structural properties.",
"DPDDI (Feng et al., 2020) encoded molecule graph with GNNs, from which latent features were concatenated for pairs of drugs and ingested into a deep neural network.",
"Table 4 shows the top 20 most potential interactions predicted by MM-Deacon (concat) in the non-interaction set (false positives), 13 out of which are confirmed as true positives by DrugBank 1 .",
"While, the number is 7/20 for ensemble models (Zhang et al., 2017).",
"After being pre-trained on 10 million molecules, MM-Deacon showed outstanding knowledge transfer capabilities to various downstream scenarios (Fig. 3) where a pre-trained model could be used.",
"The competitive performance on seven molecular property prediction tasks from MoleculeNet benchmark demonstrated the effectiveness of the pre-trained MM-Deacon when adopting a network fine-tuning scheme as shown in Fig.",
"3(a).",
"The evaluation results of zero-shot cross-lingual retrieval further revealed that MM-Deacon SMILES and IUPAC fingerprints shared a substantial amount of mutual information, implying that an IUPAC name can be used directly without first being translated to SMILES format as chemists have done in the past.",
"The DDI prediction task showed that MM-Deacon also allows directly using embeddings in the joint cross-modal space as molecular fingerprints for downstream prediction tasks, which is a widely used strategy in cheminformatics.",
"MM-Deacon profited from the alignment of two molecule languages with distinct forms of nomenclatures, as opposed to the baseline MLM[CLS] model, which was pre-trained on SMILES representation only.",
"Furthermore, we looked at molecule-level and token-level alignments of MM-Deacon to untangle the outcome of cross-lingual contrastive learning.",
"We used centered kernel alignment (CKA) (Ko-rnblith et al., 2019) with RBF kernel to compare representations between different layers.",
"In Fig.",
"7(a), the representations of 6 Transformer layers and the final projection layer were compared between MM-Deacon SMILES and IUPAC branches, where the representations differ in shallow layers, while reach a high level of alignment in deeper layers.",
"In Fig.",
"7(b), both the MM-Deacon SMILES branch and MLM[CLS] model take SMILES as the input, therefore the shallow layers have a high alignment score, while the representation varies as the network grows deeper.",
"Fig. 7 shows that MM-Deacon aligned SMILES and IUPAC representations effectively, and that molecular representations trained with SMILES and IUPAC differs from representations trained only on SMILES.",
"The cosine similarity matrix of MM-Deacon fingerprints between tokens from the IUPAC corpus and tokens from the SMILES corpus is shown in Fig. 8.",
"The table in Fig. 8 lists IUPAC tokens expressed in SMILES language, and the heat map demonstrates that there exists a good token-level alignment between SMILES and IUPAC.",
"In this study, we proposed a novel method for multilingual molecular representation learning that combines mutual information from SMILES-IUPAC joint training with a self-supervised contrastive loss.",
"We evaluated our approach for molecular property prediction, zero-shot cross-lingual retrieval, and DDI prediction.",
"Our results demonstrate that the self-supervised multilingual contrastive learning framework holds enormous possibilities for chemical domain exploration and drug discovery.",
"In future work, we plan to scale MM-Deacon pretraining to larger dataset sizes, as well as investigate the applicability of MM-Deacon to other types of molecule languages.",
"We would like to thank Min Xiao and Brandon Smock for some insightful discussions."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"other"
] |
[
"Spoken language understanding (SLU) requires a model to analyze input acoustic signal to understand its linguistic content and make predictions.",
"To boost the models' performance, various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text.",
"However, the inherent disparities between the two modalities necessitate a mutual analysis.",
"In this paper, we propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules.",
"Besides conducting a self-supervised masked language modeling task on the two individual modules using unpaired speech and text, SPLAT aligns representations from the two modules in a shared latent space using a small amount of paired speech and text.",
"Thus, during fine-tuning, the speech module alone can produce representations carrying both acoustic information and contextual semantic knowledge of an input acoustic signal.",
"Experimental results verify the effectiveness of our approach on various SLU tasks.",
"For example, SPLAT improves the previous state-of-the-art performance on the Spoken SQuAD dataset by more than 10%.",
"Spoken language understanding (SLU) tackles the problem of comprehending audio signals and making predictions related to the content.",
"SLU has been widely employed in various areas such as intent understanding (Tur and De Mori, 2011; Bhargava et al., 2013; Ravuri and Stolcke, 2015; Lugosch et al., 2019), question answering (Lee et al., 2018; Chuang et al., 2020), and sentiment analysis (Zadeh et al., 2018).",
"Early approaches leverage a two-step pipeline: use automatic speech recognition (ASR) to transcribe input audio into text, and then employ language understanding models to produce Equal contribution.",
"The work was done when Yu-An Chung was interning at Microsoft.",
"results.",
"However, such cascaded system has several drawbacks.",
"First, the transcription produced by the ASR module often contains errors, which adversely affects the language understanding mod-ule's prediction accuracy.",
"Second, even if the transcription is perfect, the rich prosodic information of speech (e.g., tempo, pitch, and intonation) is inevitably lost after ASR.",
"In comparison, humans often leverage these information to better understand and disambiguate the content.",
"Therefore, there has been a rising trend of end-to-end approaches to retain information from audio signals to carry out the understanding task (Serdyuk et al., 2018; Chen et al., 2018; Haghani et al., 2018).",
"While end-to-end SLU methods are effective, they often suffer from a shortage of labeled training data, especially when the target task is in a novel domain.",
"One solution is to leverage self-supervised training as is done in pre-trained language models.",
"Examples like BERT (Devlin et al., 2019), GPT (Radford et al., 2018), and RoBERTa (Liu et al., 2019) are first pre-trained on large-scale unannotated text in a self-supervised fashion to learn rich textual representations before being fine-tuned on downstream tasks with a modest amount of labeled data.",
"Borrowing this idea, several pretraining methods have been proposed for speech, e.g., wav2vec (Schneider et al., 2019; Baevski et al., 2020a), contrastive predictive coding (Oord et al., 2018; Rivire et al., 2020), autoregressive predictive coding (Chung et al., 2019a, 2020; Chung and Glass, 2020b), and DeCoAR (Ling et al., 2020; Ling and Liu, 2020), to capture contextual representations from unlabeled speech data.",
"Nevertheless, these methods leverage only acoustic data and mainly focus on modeling the acoustic information during pre-training.",
"As a result, the produced representations may not be optimal for language understanding tasks.",
"SPLAT.",
"SPLAT contains a speech module and a language module for multi-modal understanding.",
"The speech module is a Transformer encoder trained from scratch and the language module is initialized from BERT.",
"Both modules leverage large-scale unannotated data for pre-training via masked language modeling.",
"In the speech module, each frame is seen as a token and is replaced with zero vector with a certain probability.",
"For each masked frame, we minimize the L1-distance between the predicted frame and the original frame.",
"Then, to make the speech module aware of the contextual information extracted from the language module, we design an alignment loss to align the representations from both modules in a shared latent semantic space.",
"In detail, we propose two alignment methods, a sequence-level one and a token-level one, that leverage a small amount of paired speech and text to minimize the disparity between the acoustic representations from the speech module and the textual representations from the language module.",
"In this way, the speech representations will carry not only the acoustic information but also the contextual knowledge from the text.",
"After this alignment, when text input is absent during fine-tuning, the speech module alone can produce representations that bridge the speech input and the language understanding output.",
"We conduct extensive evaluations on several downstream SLU tasks, including Fluent Speech Commands for intent detection, Switchboard for dialog act classification, CMU-MOSEI for spoken sentiment analysis, and Spoken SQuAD for spoken question answering.",
"SPLAT achieves superior results in all datasets.",
"For example, SPLAT improves the previous state-of-the-art performance on the Spoken SQuAD dataset by more than 10%.",
"Furthermore, we show that SPLAT can perform well even given just a tiny portion of the labeled training data in downstream tasks.",
"Spoken language understanding In recent years, due to its flexibility and effectiveness, end-to-end spoken language understanding (SLU) has been proposed and applied to various tasks (Qian et al., 2017; Serdyuk et al., 2018; Lugosch et al., 2019).",
"For instance, Qian et al. (2017) use an auto-encoder to initialize the SLU model.",
"Lugosch et al. (2019) pre-train the model to recognize words and phonemes, and then fine-tune it on downstream tasks.",
"Chen et al. (2018) pre-train the model to categorize graphemes, and the logits are fed into the classifier.",
"In most of these approaches, the model pre-training requires annotated speech, e.g., word or phonemes corresponding to audio signals.",
"As a result, the massive unlabeled speech data cannot be utilized by these models.",
"Self-supervised pre-training for language Pretrained models have achieved great success in both language and speech domains.",
"In language, BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), UniLM (Dong et al., 2019), and BART (Lewis et al., 2020) have been successfully applied to natural language inference (Zhang et al., 2020b), question answering (Zhu et al., 2018), and summarization (Zhu et al., 2019).",
"These pretrained models leverage self-supervised tasks such as masked language modeling (MLM), next sentence prediction, and de-noising autoencoder.",
"Self-supervised pre-training for speech In speech, wav2vec (Schneider et al., 2019) leverages contrastive learning to produce contextual representations for audio input; vq-wav2vec (Baevski et al., 2020a) and wav2vec 2.0 (Baevski et al., 2020b) further propose to discretize the original continuous audio signals in order to enable more effi-cient MLM training with Transformer (Vaswani et al., 2017).",
"Pre-trained speech models have been applied to ASR (Ling et al., 2020; Chung and Glass, 2020a; Baevski et al., 2020b), phoneme recognition (Song et al., 2020; Liu et al., 2020a), speech translation (Nguyen et al., 2020; Chung et al., 2019c), and speech synthesis (Chung et al., 2019b), to name a few.",
"Nevertheless, an SLU model must incorporate both acoustic and language understanding capabilities to project speech signals to semantic outputs.",
"Thus, a pre-trained model for SLU needs to address tasks beyond a single modality.",
"Speech and language joint pre-training Recently, SLU applications have prompted joint pretraining on both speech and text data.",
"SpeechBERT (Chuang et al., 2020) applies MLM to pairs of audio and transcripts.",
"However, there are several crucial differences to compared to our work.",
"First, SpeechBERT contains a phonetic-semantic embedding module that requires forced alignment to first segment speech into word segments to obtain.",
"Second, both the pre-training and fine-tuning phases of SpeechBERT require both speech and text input, since it is designed for a specific spoken question answering task.",
"However, many SLU tasks only take speech as input, which does not align with the design of SpeechBERT.",
"In contrast, our model can learn to align acoustic and textual representations using just (a small amount of) paired data during pre-training, and only needs speech input for downstream tasks.",
"Denisov and Vu (2020) propose to align speech and language embeddings in a method similar to ours.",
"However, there are several key differences.",
"First, Denisov and Vu (2020) employ the encoder of a pre-trained ASR model, which already requires plentiful of annotated speech to obtain.",
"Our model, on the other hand, conducts self-supervised learning to pre-train the speech module using unannotated speech.",
"Secondly, besides sequence-level alignment, we propose a token-level alignment method, which is suitable for token-level downstream tasks.",
"Last but not least, our model uses a much smaller paired speech and text for alignment (10 hours) than Denisov and Vu (2020) (1,453 hours), yet still largely outperforms their method in intent detection and dialog act classification.",
"In this section we present SPLAT, a framework for learning joint contextual representations of speech and language.",
"The model consists of a speech module and a language module that share a similar architecture and learning algorithm.",
"The pretraining of SPLAT is divided into two steps.",
"First, we individually pre-train the speech and language modules using unannotated speech and text, respectively.",
"Then, we leverage a simple yet effective alignment task that uses only a small amount of paired speech and text data to align the representations from both modules in a shared latent semantic space such that the information learned by the language module is transferred to the speech module.",
"After pre-training, the language module is discarded and only the speech module is used in downstream tasks.",
"Below we formally describe the procedures for pre-training the speech (3.1) and language modules (3.2), and the alignment loss (3.3) for aligning the representations from the two modules.",
"Figure 1 provides an overview of the pre-training procedures of SPLAT.",
"The goal of this module is to leverage unlabeled speech data to learn representations that capture meaningful acoustic information about speech utterances such as their phonetic content and speaker characteristics.",
"Formally, the input to the speech module is a 80-dimensional log Mel spectrogram, ( x 1 , ..., x n ) , where x i 2 R 80 , 1 i n .",
"The speech module, which is implemented as a Transformer architecture, then produces hidden representations ( s 1 , ..., s n ) and predictions ( x 1 , ..., x n ) , where s i 2 R 768 and x i 2 R 80 .",
"To boost its capacity for contextual understanding, we borrow the idea of masked language modeling (MLM) (Devlin et al., 2019; Liu et al., 2020c; Wang et al., 2020; Liu et al., 2020b).",
"Specifically, each audio frame x i is replaced with a zero vector with a probability of 15%.",
"The corresponding output x i is trained to be close to the original frame x i via minimizing their L1-distance.",
"Additionally, since consecutive frames are highly correlated, it is possible that the model simply utilizes the local smoothness of speech signals for reconstructing a single frame and thus fails to capture useful information.",
"To avoid such issue, when a frame x i is selected to be masked, its following three frames x i +1 , x i +2 , and x i +3 are also masked, and the model is asked to reconstruct all these masked frames.",
"Furthermore, according to SpecAugment (Park et al., 2019), the input features ( x 1 , ..., x n ) can be seen as comprising two dimensions: time, i.e., the subscript i , and channel, i.e., the elements in each x i .",
"While conventional MLM masks along certain time steps, the input signals can also be masked along the channel dimension.",
"In other words, each column vector [ x 1 ,j , ..., x n,j ] for 1 j 80 has a 15% of chance to be masked, i.e., replaced with a zero vector.",
"This channel masking is combined with temporal masking to reinforce the model's capability to utilize contextual information from both time and channel, and reduce the impact of co-adaptation between acoustic frames.",
"The final pre-training objective for the speech module is to reconstruct the entire input sequence from the altered version of it: L sp = X i =1 , 2 ,...,n k x i \u0000 x i k 1 (1) We use the speech portion of the train-clean-360 subset from the LibriSpeech corpus (Panayotov et al., 2015) to pre-train the speech module, i.e., to minimize L sp .",
"This subset contains 360 hours of read speech produced by 921 speakers.",
"We follow the standard Kaldi setting, using a frame size of 25ms and a time shift of 10ms for generating the 80-dimensional log Mel spectrograms.",
"The spectrograms are normalized to zero mean and unit variance per speaker.",
"The language module aims to offer contextual understanding for text input.",
"We directly employ the BERTBASE model released by Devlin et al. (2019), which is pre-trained on a large text corpus with the MLM task and contains rich textual representations, as the language module.",
"We denote the cross-entropy loss for the language MLM task as L text .",
"Given input token embeddings ( y 1 , ..., y m ) , where y 1 corresponds to the [CLS] token, the module produces contextual representations ( t 1 , ..., t m ) , where t j 2 R 768 , 1 j m .",
"The input to most SLU tasks consists of only audio signals, but the model is required to conduct semantic understanding, which can be best handled when",
"textual information is present.",
"Therefore, we propose to align the pre-trained speech and language representations in a shared semantic latent space.",
"Suppose a pair of speech and text data consisting of an acoustic feature sequence ( x 1 , ..., x n ) and its transcript ( y 1 , ..., y m ) .",
"The speech and language modules separately produce the output representations ( s 1 , ..., s n ) and ( t 1 , ..., t m ) .",
"We then propose two methods to align the embeddings from the modules: sequence-level and token-level alignment.",
"Sequence-level alignment For sequence-level alignment, we treat the first embeddings from the two output representations, i.e., s 1 and t 1 , as the sequence-level representations of their respective sequences, and minimize their L1-distance: L seq = k s 1 \u0000 t 1 k 1 (2) Since our goal is to transfer the textual knowledge contained by the language module to the speech module, we only update the speech module to minimize L seq and keep the language module fixed.",
"After pre-training, when the transcript is absent in downstream tasks, the first output embedding of the speech module s 1 will still be close to its corresponding text embedding t 1 from the language module, as if the transcript were given.",
"It follows that s 1 can then be used to predict the property of the whole audio input, e.g., intent classification.",
"Token-level alignment To achieve a finer level of alignment, each audio feature should be compared with its each text token.",
"Although forced alignment (Gorman et al., 2011) can establish this correspondence between audio signals and individual words, it requires a pre-trained ASR system to obtain.",
"Here we propose a method that automatically aligns audio features with textual tokens.",
"Inspired by BERTScore (Zhang et al., 2020a), for each output text embedding t j , we first compute its cosine similarity with each output acoustic embedding s i , and select the acoustic feature with the highest similarity.",
"Then, the alignment is performed by maximizing the sum of these maximum similarities over all tokens, weighted by each to-ken's inverse document frequency (idf) to reduce the impact of common words: L tok = \u0000 P mj =1 idf ( t j ) max i cossim ( s i , t j ) P mj =1 idf ( t j ) (3) The token-level alignment loss is illustrated in Figure",
"Algorithm 1 Pre-training SPLAT",
"Input: An unlabeled speech corpus X = { x ( p ) } Np =1 , an unlabeled text corpus Y = { y ( q ) } Mq =1 , and a paired speech-text corpus Z = { ( x ( k ) , y ( k ) ) } Kk =1 , where K N, M .",
"1: Use X to train the speech module by minimizing L sp (Equation 1).",
"2: Use Y to train the language module by minimizing L text (we directly employ BERTBASE from Devlin et al. (2019) for this step).",
"3: Use { y ( k ) } Kk =1 from Z to train the language module by minimizing L text .",
"4: Use Z to align the two modules by minimizing L seq (Equation 2) or L tok (Equation 3).",
"5: Discard the language module.",
"Output: The final speech module.",
"To minimize the alignment loss, we randomly sample 10 hours of audio paired with its transcripts from the train-clean-360 subset, of which the speech portion is used to pre-train the speech module ( 3.1).",
"In practice, before minimizing the alignment loss, we find it beneficial to train (i.e., minimize L text ) the language module initialized with BERTBASE with the 10-hour LibriSpeech transcripts with the MLM task.",
"This step allows the model to adapt to the speech domain and facilitates the following alignment task.",
"We summarize the complete procedure of pretraining SPLAT in Algorithm",
"1. After pre-training, the language module is discarded and only the speech module is used in downstream tasks.",
"We include a number of strong baselines from recent literature for each downstream task (Lugosch et al., 2019; Duran and Battle, 2018; Ghosal et al., 2018; Chuang et al., 2020).",
"We also compare with another speech-language joint pre-training framework (Denisov and Vu, 2020).",
"For each baseline, the reported performance is achieved by system that either uses similar or more amounts of data than our model.",
"To verify the effectiveness of each component in SPLAT, we experiment with the following variants of it, including whether to pre-train the model, Table 1: Variants of SPLAT.",
"SPLAT-Scratch : No pre-training is conducted at all.",
"Speech module is trained from scratch on downstream tasks.",
"SPLAT-Speech : Only the speech module is pre-trained.",
"Language module and alignment loss are not incorporated.",
"SPLAT-Seq : SPLAT with sequence-level alignment loss L seq , but language module is not trained on LibriSpeech transcripts with MLM before alignment.",
"SPLAT-Seq-MLM : SPLAT with sequence-level alignment loss L seq , and language module is trained on LibriSpeech transcripts with MLM before alignment.",
"SPLAT-Tok : SPLAT with token-level alignment loss L tok , but language module is not trained on LibriSpeech transcripts with MLM before alignment.",
"SPLAT-Tok-MLM : SPLAT with token-level alignment loss L tok , and language module is trained on LibriSpeech transcripts with MLM before alignment.",
"The speech module of SPLAT is a 3-layer Transformer encoder where each layer has a hidden size of 768 and 12 self-attention heads.",
"The language module is directly initialized from the pre-trained BERTBASE released by Devlin et al. (2019).",
"We evaluate our model on four different SLU applications: intent detection, dialog act classification, spoken sentiment analysis, and spoken question answering.",
"The first three belong to multi-class classification tasks, and the last one is a span prediction problem, which will be described in more detail below.",
"Table 2 summarizes the used dataset for each application.",
"For all datasets, we use 80-dimensional log Mel spectrograms as input acoustic features as in the pre-training stage.",
"Intent detection We use the Fluent Speech Commands corpus (FSC) (Lugosch et al., 2019) for intent detection, where the goal is to correctly predict the intent of an input utterance.",
"In this dataset, each utterance is annotated with three slots: action, object, and location, where each slot can take one of multiple values.",
"The combination of slot values is defined as the intent of the utterance, and there are 31 unique intents in total.",
"In this work we follow the original paper to formulate intent detection as a simple 31-class classification task.",
"Dialog act classification We use the NTX-format Switchboard corpus (SwDA) (Calhoun et al., 2010), a dialog corpus of 2-speaker conversations.",
"The goal is to correctly classify an input Table 3: Results on all downstream datasets.",
"Spoken sentiment analysis We use the CMU-MOSEI dataset (Zadeh et al., 2018), where each utterance is annotated for a sentiment score on a [ \u0000 3 , 3] Likert scale: [-3: highly negative, -2: negative, -1: weakly negative, 0: neutral, +1: weakly positive, +2: positive, +3: highly positive].",
"We treat the task as a 7-class classification problem.",
"And we only use audio signals in the input data.",
"For the above three tasks, during fine-tuning, an MLP network with one hidden layer of 512 units is appended on top of the speech module.",
"It converts the output representation of the first frame, i.e., s 1 , for class prediction.",
"Both the pre-trained speech module and the randomly initialized MLP are fine-tuned on the training set for 10 epochs with a batch size of 64 and a fixed learning rate of 3e-4.",
"We compute classification accuracy after each training epoch and pick the best-performing checkpoint on the validation set to report results on the test set.",
"Spoken question answering We use the Spoken SQuAD dataset (Li et al., 2018), which is augmented 1 from SQuAD (Rajpurkar et al., 2016) for spoken question answering.",
"The model is given an article in the form of speech and a question in the form of text.",
"The goal is to predict a time span in the spoken article that answers the question.",
"In other words, the model outputs an audio 1 Li et al. (2018) used Google text-to-speech to generate the spoken version of the articles in SQuAD.",
"segment extracted from spoken article as the answer.",
"The model is evaluated by Audio Overlapping Score (AOS) (Li et al., 2018): the greater the overlap between the predicted span and the ground-truth answer span, the higher the score will be.",
"During fine-tuning, given a spoken article and a question in the text form, the pre-trained speech module extracts audio representations of the article and pass them to a randomly initialized 3-layer Transformer encoder along with the tokenized textual question as input.",
"The Transformer then uses the self-attention mechanism to implicitly align elements of the input audio and textual features.",
"For each time step of the audio input, the Transformer is trained to predict whether this is the start of the span with a simple logistic regression.",
"A separate classifier is used for predicting the end of the span.",
"Table 3 shows the performance of models on all four downstream tasks.",
"Each number from our model is an average over three runs.",
"Based on the results, we make the following observations.",
"Firstly, compared with SPLAT-Scratch , all pretrained models achieve superior results, especially more than 30% gain on Spoken SQuAD, proving the effectiveness of pre-training.",
"Secondly, the inclusion of language module and the alignment task during pre-training is very beneficial.",
"For instance, on CMU-MOSEI, SPLAT FSC MOSEI Spoken SQuAD SwBD 0 25 50 75 100 0 25 50 75 100 0 25 50 75 100 0 25 50 75 100 30 40 50 60 70 10 20 30 40 50 60 20 40 60 0 25 50 75 100 Training data size (%) A cc u r acy SPLAT Seq MLM SPLAT Speech SPLAT Scratch Figure 3: Performance on downstream tasks with varying training data sizes.",
"Seq-MLM outperforms SPLAT-Speech by 5.7%, and outperforms several baseline systems from recent literature.",
"We argue that as SLU tasks require the model to interpret acoustic signals and their underlying semantics, the language module will guide the speech module towards a mutual understanding of both modalities via our alignment task.",
"Thirdly, updating the language module using MLM during pre-training is helpful.",
"Although the language module has been initialized with BERT, adaptation to the speech domain can help with semantic understanding in the downstream task.",
"Types of alignment Comparing SPLAT-Seq against SPLAT-Tok , we find that sequence-level alignment outperforms token-level alignment on all four tasks, although the latter is supposed to learn more fine-grained multi-modal representations.",
"We leave the investigations of reasons for such phenomenon and more advanced token-level alignment approaches for future work.",
"Low-resource scenario We experiment with a version of SPLAT that uses only 1 hour of transcribed speech randomly sampled from the LibriSpeech train-clean-360 subset for aligning speech and language modules, denoted as SPLAT-Seq-MLM 1-hour .",
"The language module of SPLAT-Seq-MLM 1-hour after being initialized with BERTBASE is trained on the 1-hour LibriSpeech transcripts before minimizing the alignment loss.",
"It achieves comparable results with the best variant SPLAT-Seq-MLM : same accuracy on FSC, 0.5% less on SwBD, and 0.6% less on Spoken SQuAD.",
"This shows that with a small amount of labeled speech data, our pre-training framework can achieve good results on downstream tasks.",
"As human labeling is time-consuming and labor-intensive, the amount of labeled training data for downstream tasks is often small and insufficient.",
"In this section, we show that with effective pretraining, the model will be less dependent on the amount of downstream labeled data.",
"We randomly sample 50%, 10%, 5%, and 1% of the training data in the downstream tasks, and evaluate the performance of different variants of SPLAT when fine-tuned on the sampled data.",
"Figure 3 shows the performance on all four downstream tasks with varying training data sizes.",
"We observe that among the variants, SPLAT-Seq-MLM is least sensitive to training data sizes.",
"For instance, in FSC, with only 10% of the training data, its accuracy only drops 0.4 points.",
"In comparison, both SPLAT-Scratch and SPLAT-Speech drops about 10 points.",
"And the gaps are in general larger when the size of training data further shrinks.",
"Therefore, our proposed joint pre-training of speech and language modules can help the model quickly adapt to downstream tasks given a modest amount of training data.",
"So far we have empirically demonstrated the effectiveness of SPLAT for learning multi-modal speech-language representations that are useful in various SLU tasks.",
"Here we further show that our sequence-level alignment loss (Equation 2) can help project two speech utterances that have similar textual embeddings to nearby points in the speech latent space.",
"Recall that we use the embedding of the first token/feature to represent an utterance and conduct sequence-level alignment (Equation 2).",
"Sup-Table 4: Average cosine similarity between all pairs of speech embeddings ( S avg ), and the average cosine similarity between a speech embedding s ( p ) 1 and that of an utterance whose textual embedding is closest to the corresponding textual embedding t ( p ) 1 ( S closest ).",
"pose t ( p ) 1 and s ( p ) 1 correspond to the textual and speech embeddings of the first utterance by SPLAT and t ( q ) 1 and s ( q ) 1 correspond to the embeddings of the second utterance.",
"Then, if t ( p ) 1 t ( q ) 1 , our SPLAT model trained with the sequence-level alignment loss will produce s ( p ) 1 s ( q ) 1 .",
"We use the dev-clean subset from the LibriSpeech corpus for the analysis.",
"First, we compute the average pairwise cosine similarity between the utterances of all speech embeddings: S avg = 1 K ( K \u0000 1) / 2 KX p =2 p \u0000 1 X q =1 cossim ( s ( p ) 1 , s ( q ) 1 ) , (4) where K is the number of utterances in dev-clean.",
"We then compute the cosine similarity between s ( p ) 1 and s ( q ) 1 and take the average of such value over all utterances in dev-clean: S closest = 1 KKX p =1 cossim ( s ( p ) 1 , s ( q ) 1 ) .",
"(5) We show the S avg and S closest of embeddings produced by SPLAT-Speech , SPLAT-Seq , and SPLAT-Seq-MLM in Table",
"4. We see that S avg is approximately the same for all model variants.",
"Next, for each utterance with its speech and textual embeddings denoted as s ( p ) 1 and t ( p ) 1 respectively, we first use t ( p ) 1 to retrieve the utterance with the most similar textual embedding t ( q ) 1 , i.e., q = argmax 1 q K,q 6 = p cossim ( t ( p ) 1 , t ( q ) 1 ) .",
"However, S closest , the average similarity between the speech embeddings of two linguistically similar utterances, increases from 0.238 to 0.781 after aligning the speech and language modules, and further increases to 0.829 after adapting the language module on LibriSpeech transcripts with MLM before the alignment.",
"Overall, SPLAT can make a pair of semantically similar utterances to have much closer speech embeddings, compared with other random pairs of utterances.",
"These results demonstrate that via an cross-modal alignment loss as simple as Equation 2, SPLAT can effectively transfer knowledge from the language module to the speech module to capture both acoustic and linguistic information of speech utterances.",
"Spoken language understanding (SLU) tasks require an understanding of the input audio signal and its underlying semantics.",
"In this paper, we present a novel speech-language joint pre-training framework, SPLAT, to carry out both speech and language understanding tasks during pre-training.",
"Besides a self-supervised training on the speech and language modules, we propose two methods to align the semantic representations from both modules using a modest amount of labeled speech data.",
"The speech module can quickly adapt to downstream tasks and achieve superior results on various SLU datasets including intent detection, dialog act classification, spoken sentiment analysis, and spoken question answering.",
"This joint pre-training also makes the model less sensitive to the amount of labeled training data in downstream domains.",
"For future work, we plan to integrate automatic speech recognition and natural language generation into our framework to achieve good results on spoken language generation tasks."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result"
] |
[
"The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model.",
"The motivation is that, in humans, these processes underlie the same cognitive, nonsymbolic ability, which allows an automatic estimation and comparison of set magnitudes.",
"We show that when information about lower-complexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation.",
"Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects.",
"Consistently with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.",
"Understanding and producing sentences like There are more cars than parking lots', Most of the supporters wear blue t-shirts', Twenty percent of the trees have been planted last year', or Seven students passed the exam', is a fundamental competence which allows speakers to communicate information about quantities.",
"Crucially, the type of information conveyed by these expressions, as well as their underlying cognitive mechanisms, are not equivalent, as suggested by evidence from linguistics, language acquisition, and cognition.",
"First, comparatives (more', less'), quantifiers (some', most', all'), and proportions ( 20 %', two thirds') express a comparison or relation between sets (e.g., between the set of cars and the set of parking lots).",
"Such relational information is rather coarse when expressed by comparatives and vague quantifiers, more precise when denoted by proportions.",
"In contrast, numbers (one', six', twenty-two') denote the exact, absolute cardinality of the items belonging to one set (e.g., the set of students who passed the exam).",
"Second, during language acquisition, these expressions are neither learned at the same time nor governed by the same rules.",
"Recent evidence showed that children can understand comparatives at around 3 .",
"3 years (Odic et al., 2013; Bryant, 2017), with quantifiers being learned a few months later, at around 3 .",
"4 3 .",
"6 years (Hurewitz et al., 2006; Minai, 2006; Halberda et al., 2008).",
"Crucially, knowing the meaning of numbers, an ability that starts not before the age of 3 .",
"5 years (Le Corre and Carey, 2007), is not required to understand and use these expressions.",
"As for proportions, they are acquired significantly later, being fully mastered only at the age of 9 or 10 (Hartnett and Gelman, 1998; Moss and Case, 1999; Sophian, 2000).",
"Third, converging evidence from cognition and neuroscience supports the hypothesis that some important components of these expressions of quantity are grounded on a preverbal, nonsymbolic system representing magnitudes (Piazza, 2010).",
"This system, often referred to as Approximate Number System (ANS), is invariant to the sensory modality and almost universal in the animal domain, and consists in the ability of holistically extracting and comparing approximate numerosities (Piazza and Eger, 2016).",
"In humans, it is present since the youngest age, with 6 -month-old infants being able to automatically compare sets and combine them by means of proto-arithmetical operations (Xu and Spelke, 2000; McCrink and Wynn, 2004).",
"Since it obeys Weber's law, according to which highly differing sets (e.g. 2 : 8 ) are easier to discriminate than highly similar sets (e.g. 7 : 8 ), ANS has been recently claimed to be a ratio-based mechanism (Sidney et al., 2017; Matthews et al., 2016).",
"In support of this, behavioral findings indicate that, in non-symbolic 419 Figure 1: Toy representation of the quantification tasks and corresponding outputs explored in the paper.",
"contexts (e.g. visual scenes), proportional values are extracted holistically, i.e. without relying on the pre-computed cardinalities of the sets (Fab-bri et al., 2012; Yang et al., 2015).",
"Indeed, people are fairly accurate in providing the proportion of targets in a scene, even in high-speed settings (Healey et al., 1996; Treisman, 2006).",
"Similarly, in briefly-presented scenes, the interpretation of quantifiers is shown to be best described by proportional information (Pezzelle et al., under review).",
"Altogether, this suggests that performing ( 1 ) set comparison, ( 2 ) vague quantification, and ( 3 ) proportional estimation, which all rely on information regarding relations among sets, underlies increasingly-complex steps of the same mechanism.",
"Notably, such complexity would range from more/less' judgements to proportional estimation, as suggested by the increasing precision of ANS through years (Halberda and Feigen-son, 2008), the reported boundary role of half' in early proportional reasoning (Spinillo and Bryant, 1991), and the different age of acquisition of the corresponding linguistic expressions.",
"Finally, the ratio-based operation underlying these task would be different from (and possibly conflicting with) that of estimating the absolute numerosity of one set.",
"Indeed, absolute numbers are found to interfere with the access to proportions (Fabbri et al., 2012).",
"Inspired by this converging evidence, the present work proposes a computational framework to explore various quantification tasks in the visual domain (see Figure 1).",
"In particular, we investigate whether ratio-based quantification tasks can be modeled by a single, multi-task learning neural network.",
"Given a synthetic scene depicting animals (in our setting, the target' objects) and artifacts (non-target'), our model is designed to jointly perform all the tasks by means of an architecture that reflects their increasing complexity.",
"1 To perform proportional estimation (the most complex), the model builds on the representations learned to perform vague quantification and, in turn, set comparison (the least complex).",
"We show that the multi-task model achieves both higher accuracy and higher generalization power compared to the one-task models.",
"In contrast, we prove that introducing the absolute number task in the loop is not beneficial and indeed hurts the performance.",
"Our main contribution lies in the novel application and evaluation of a multi-task learning architecture on the task of jointly modeling 3 different quantification operations.",
"On the one hand, our results confirm the interdependency of the mechanisms underlying the tasks of set comparison, vague quantification, and proportional estimation.",
"On the other, we provide further evidence on the effectiveness of these computational architectures.",
"In recent years, the task of extracting quantity information from visual scenes has been tackled via Visual Question Answering (VQA).",
"Given a real image and a natural language question, a VQA computational model is asked to understand the image, the linguistic query, and their interaction to provide the correct answer.",
"So-called count questions, i.e. How many Xs have the property Y ?', are very frequent and have been shown to be particularly challenging for any model (Antol et al., 2015; Malinowski et al., 2015; Ren et al., 2015; Fukui et al., 2016).",
"The difficulty of the task has been further confirmed by the similarly poor performance achieved even on the diagnos-tic' datasets, which include synthetic visual scenes depicting geometric shapes (Johnson et al., 2017; Suhr et al., 2017).",
"Using Convolutional Neural Networks (CNN), a number of works in Computer Vision (CV) have proposed specific architectures for counting digits (Segu et al., 2015), people in the crowd (Zhang et al., 2015a), and penguins (Arteta et al., 2016).",
"With a more cognitive flavor, Chattopadhyay et al. (2017) employed a divide-and-conquer' strategy to split the image into subparts and count the objects in each subpart by mimicking the subitizing' mechanism (i.e. numerosities up to 3 4 can be rapidly and accurately appreciated).",
"Inspired by 1 The dataset and the code can be downloaded from github.com/sandropezzelle/multitask-quant 420 the same cognitive ability is Zhang et al. (2015b), who trained a CNN to detect and count the salient objects in the image.",
"Except Suhr et al. (2017), who evaluated models against various types of quantity expressions (including existential quanti-fiers), these works were just focused on the absolute number.",
"More akin to our work is Stoianov and Zorzi (2012), who showed that hierarchical generative models learn ANS as a statistical property of (syn-thetic) images.",
"Their networks were tested on the task of set comparison (more/less') and obtained 93% accuracy.",
"A few studies specifically focused on the learning of quantifiers.",
"Sorodoc et al. (2016) proposed a model to assign the correct quantifier to synthetic scenes of colored dots, whereas Sorodoc et al. (2018) operationalized the same task in a VQA fashion, using real images and object-property queries (e.g. How many dogs are black ?').",
"Overall, the results of these studies showed that vague quantification can be learned by neural networks, though the performance is much lower when using real images and complex queries.",
"Finally, Pezzelle et al. (2017) investigated the difference between the learning of cardinals and quantifiers from visual scenes, showing that they require two distinct computational operations.",
"To our knowledge, this is the first attempt to jointly investigate the whole range of quantification mechanisms.",
"Moreover, we are the first to exploit a multi-task learning paradigm for exploring the interactions between set comparison, vague quantification, and proportions.",
"Multi-Task Learning (MTL) has been shown to be very effective for a wide range of applications in machine learning (for an overview, see Ruder (2017)).",
"The core idea is that different and yet related tasks can be jointly learned by a multipurpose model rather than by separate and highly fine-tuned models.",
"Since they share representations between related (or auxiliary') tasks, multitask models are more robust and generalize better than single-task models.",
"Successful applications of MTL have been proposed in CV to improve object classification (Girshick, 2015), face detection and rotation (Zhang et al., 2014; Yim et al., 2015), and to jointly perform a number of tasks as object detection, semantic segmentation, etc. (Misra et al., 2016; Li and Hoiem, 2016).",
"Though, recently, a few studies applied MTL techniques to either count or estimate the number of objects in a scene (Sun et al., 2017; Sindagi and Patel, 2017), to our knowledge none of them were devoted to the learning of various quantification mechanisms.",
"In the field of natural language processing (NLP), MTL turned out to be beneficial for machine translation (Luong et al., 2016) and for a range of tasks such as chunking, tagging, semantic role labelling, etc. (Collobert et al., 2011; Sgaard and Goldberg, 2016; Bingel and Sgaard, 2017).",
"In particular, Sgaard and Goldberg (2016) showed the benefits of keeping low-level tasks at the lower layers of the network, a setting which enables higher-level tasks to make a better use of the shared representations.",
"Since this finding was also in line with previous evidence suggesting a natural order among different tasks (Shen and Sarkar, 2005), further work proposed MTL models in which several increasingly-complex tasks are hierarchically ordered (Hashimoto et al., 2017).",
"The intuition behind this architecture, referred to as joint many-task model' in the source paper (Hashimoto et al., 2017), as well as its technical implementation, constitute the building blocks of the model proposed in the present study.",
"Given a visual scene depicting a number of animals (targets) and artifacts (non-targets), we explore the following tasks, represented in Figure",
"(a) set comparison (hence, setComp ), i.e. judging whether the targets are more', same', less' than non-targets;",
"(b) vague quantification (hence, vagueQ ), i.e. predicting the probability to use each of the 9 quantifiers (none', almost none', few', the smaller part', some', many', most', almost all', all') to refer to the target set;",
"(c) proportional estimation (hence, propTarg ), i.e. predicting the proportion of targets choosing among 17 ratios, ranging from 0 to 100 %.",
"Tasks",
"(a) and",
"(c) are operationalized as classification problems and evaluated through accuracy.",
"That is, only one answer out of 3 and 17 , respectively, is considered as correct.",
"Given the vague status of quantifiers, whose meanings are fuzzy' and overlapping, task",
"(b) is evaluated by means 421 Figure 2: Two scenes included in our dataset.",
"of Pearson's correlation ( r ) between the predicted and the ground-truth probability vector (cf. 3.2), for each datapoint.",
"2 The overall r is obtained by averaging these scores.",
"It is worth mentioning that we could either evaluate",
"(b) in terms of a classification task or operationalize",
"(a) and",
"(c) in terms of a correlation with human responses.",
"The former evaluation is straightforward and can be easily carried out by picking the quantifier with the highest probability.",
"The latter, in contrast, implies relying on behavioral data assessing the degree of overlap between ground-truth classes and speak-ers' choice.",
"Though interesting, such evaluation is less crucial given the discrete, non-overlapping nature of the classes in tasks",
"(a) and",
"(c).",
"The tasks are explored by means of a MTL network that jointly performs the three quantification operations (see 4.2).",
"The intuition is that solving the lower-level tasks would be beneficial for tackling the higher-level ones.",
"In particular, providing a proportional estimation ( 80 %') after performing vagueQ (most') and setComp (more') should lead to a higher accuracy in the highest-level task, which represents a further step in complexity compared to the previous ones.",
"Moreover, lower-level tasks might be boosted in accuracy by the higher-level ones, since the latter include all the operations that are needed to carry out the former.",
"In addition to the MTL model, we test a number of one-task' networks specifically designed to solve one task at a time (see 4.1).",
"(see Figure 2).",
"In doing so, we employed the same methodology and materials used in Pezzelle et al. (under review), where the use of quantifiers in grounded contexts was explored by asking participants to select the most suitable quantifier for a given scene.",
"Since the category of animals was always treated as the target', and that of artifacts as the non-target', we will henceforth use this terminology throughout the paper.",
"The scenes were automatically generated by an in-house script using the following pipeline:",
"(a) Two natural images, one depicting a target object (e.g. a butter-fly) and one depicting a non-target (e.g. a mug) were randomly picked up from a sample of the dataset by Kiani et al. (2007).",
"The sample was obtained by Pezzelle et al. (under review), who manually selected pictures depicting whole items (not just parts) and whose color, orientation and shape were not deceptive.",
"In total, 100 unique instances of animals and 145 unique instances of artifacts were included;",
"(b) The proportion of targets in the scene (e.g. 20 %) was chosen by selecting one among 17 pre-defined ratios between targets:non-targets (e.g. 1 : 4 , four non-targets to one target').",
"Out of 17 ratios, 8 were positive (tar-gets > 50 %), 8 negative (targets < 50 %), and 1 equal (targets = 50 %);",
"(c) The absolute number of targets/non-targets was chosen to equally represent the various combinations available for a given ratio (e.g., for ratio 1 : 4 : 1 4 , 2 8 , 3 12 , 4 16 ), with the constraint of having a number of total objects in the scene (targets+non-targets) ranging from 3 to 20 .",
"In total, 97 combinations were represented in the dataset, with an average of 5 .",
"7 combina-tions/ratio (min 2 , max 18 );",
"(d) To inject some variability, the instances of target/non-target objects were randomly resized according to one of three possible sizes (i.e. medium, big, and small) and flipped on the vertical axis before being randomly inserted onto a 5 * 5 -cell virtual grid.",
"As reported in Table 1, 17 K scenes balanced per ratio ( 1 K scenes/ratio) were generated and further split into train ( 70 %), validation ( 10 %), and test ( 20 %).",
"Ground-truth classes for the tasks of setComp and propTarg were automatically assigned to each scene while generating the data.",
"For vagueQ, 422 we took the probability distributions obtained on a dataset of 340 scenes by Pezzelle et al. (un-der review) and we applied them to our datapoints, which were built in the exact same way.",
"These probability distributions had been collected by asking participants to select, from a list of 9 quantifiers (reported in 3.1), the most suitable one to describe the target objects in a visual scene presented for 1 second.",
"In particular, they were computed against the proportion of targets in the scene, which in that study was shown to be the overall best predictor for quantifiers.",
"To illustrate, given a scene containing 20% of targets (cf. leftmost panel in Figure 2), the probability of choosing few' (ranging from 0 to 1 ) is 0 .",
"38 , almost none' 0 .",
"27 , the smaller part' 0 .",
"25 , etc.",
"It is worth mentioning that, for scenes containing either 100 % or 0 % targets the probability of choosing all' and none', respectively, is around 1 .",
"In all other cases, the distribution of probabilities is fuzzier and reflects the largely overlapping use of quantifiers, as in the example above.",
"On average, the probability of the most-chosen quantifier across ratios is 0 .",
"53 .",
"Though this number cannot be seen as a genuine inter-annotator agreement score, it suggests that, on average, there is one quantifier which is preferred over the others.",
"In this section, we describe the various models implemented to perform the tasks.",
"For each model, several settings and parameters were evaluated by means of a thorough ablation analysis.",
"Based on a number of factors like performance, speed, and stability of the networks, we opted for using ReLU nonlinear activation at all hidden layers and the simple and effective Stochastic Gradient Descent (SGD) as optimizer (lr = 0 . 01 ).",
"We run each model for 100 epochs and saved weights and parameters of the epoch with the lowest validation loss.",
"The best model was then used to obtain the predictions in the test set.",
"All models were implemented using Keras.",
"3 4.1 One-Task Models We implemented separate models to tackle one task at a time.",
"For each task, in particular, both a network using frozen' (i.e. pretrained) visual features and one computing the visual features in an end-to-end' fashion were tested.",
"One-Task-Frozen These models are simple, 2 layer (ReLU) Multi-Layer Perceptron (MLP) networks that take as input a 2048 -d frozen representation of the scene and output a vector containing softmax probability values.",
"The frozen representation of the scene had been previously extracted using the state-of-art Inception v3 CNN (Szegedy et al., 2016) pretrained on ImageNet (Deng et al., 2009).",
"In particular, the network is fed with the average of the features computed by the last Convolutional layer, which has size 25 * 2048 .",
"One-Task-End2end These models are MLP networks that take as input the 203 * 203 -pixel image and compute the visual features by means of the embedded Inception v3 module, which outputs 25 * 2048 -d vectors (the grey and colored box in Figure 1).",
"Subsequently, the 25 feature vectors are reduced twice via ReLU hidden layers, then concatenated, reduced (ReLU), and fed into a softmax layer to obtain the probability values.",
"The multi-task-prop model performs 3 tasks at the same time with an architecture that reproduces in its order the conjectured complexity (see Figure 3 and its caption for technical details).",
"The model has a core structure, represented by layers 1 5 in the figure, which is shared across tasks and trained with multiple outputs.",
"In particular,",
"(a) layers 1 , 2 , and 3 are trained using information regarding the output of all 3 tasks.",
"That is, these layers are updated three times by as many back-propagation passes: One on the top of setComp output, the second on the top of vagueQ output, the third on the top of propTarg output;",
"(b) layers 4 and 5 are affected by information regarding the output of vagueQ and propTarg, and thus updated twice;",
"(c) layers 6 and 7 are updated once, on the top of the output of propTarg.",
"Importantly, the three lower layers in Figure 3 (concatenation, ReLU, softmax) are not shared between the tasks, but specialized to output each a specific prediction.",
"As can be noted, the order of the tasks reflects their complexity, since the last task in the pipeline has 2 more layers than the preceding one and 4 more than the first one.",
"Table 2 reports the performance of each model in the various tasks (note that the lowest row and the rightmost column report results described",
"in 6.1).",
"In setComp, all the models are neatly above chance/majority level ( 0 . 47 ).",
"In particular, the one-task-end2end model achieves a remarkable 0 .",
"90",
"acc., which is more than 10 % better compared to the simple one-task-frozen model ( 0 . 78 ).",
"The same pattern of results can be observed for vagueQ, where the Pearson's correlation ( r ) between the ground-truth and the predicted probability vector is around 0 .",
"96 , that is more than 30 % over the simpler model ( 0 . 62 ).",
"This gap increases even more in propTarg, where the accuracy of the frozen model is more than 40 points below the one achieved by the one-task-end2end model ( 0 . 21 against 0 . 66 ).",
"These results firmly indicate that, on the one hand, the frozen representation of the visual scene encodes little information about the proportion of targets (likely due to the the different task for which they were pretrained, i.e. object classification).",
"On the other hand, computing the visual features in an end-to-end fashion leads to a significant improvement, suggesting that the network learns to pay attention to features that are helpful for specific tasks.",
"The most interesting results, however, are those achieved by the multi-task model, which turns out to be the best in all the tasks.",
"As reported in Table 2, sharing the weights between the various tasks is especially beneficial for propTarg, where the accuracy reaches 0 .",
"92 , that is, more than 25 points over the end-to-end, one-task model.",
"An almost perfect performance of the model in this task can be observed in Figure 4, which reports the confusion matrix with the errors made by the model.",
"As can be seen, the few errors are between touching' classes, e.g. between ratio 3 : 4 ( 43 % of targets) and ratio 2 : 3 ( 40 %).",
"Since these classes 424 model setComp vagueQ propTarg nTarg accuracy Pearson r accuracy accuracy chance/majority 0.470 0.320 0.058 0.132 one-task-frozen 0.783 0.622 0.210 0.312 one-task-end2end 0.902 0.964 0.659 0.966 multi-task-prop 0.995 0.982 0.918 multi-task-number 0.854 0.807 0.478 Table 2: Performance of the models in the tasks of set comparison (setComp), vague quantification (vagueQ), proportional estimation (propTarg), and absolute number of targets (nTarg).",
"differ by a very small percentage, we gain indirect evidence that the model is learning some kind of proportional information rather than trivial associations between scenes and orthogonal classes.",
"To further explore this point, one way is to inspect the last layer of the proportional task (i.e. the 32-d turquoise vector in Figure 3).",
"If the vectors contain information regarding the proportion of targets, we should expect scenes depicting the same proportion to have a similar representation.",
"Also, scenes with similar proportions (e.g. 40 % and 43 %) would be closer to each other than are scenes with different proportions (e.g. 25 % and 75 %).",
"Figure 5 depicts the results of a two-dimensional PCA analysis performed on the vectors of the last layer of the proportional task (the 32 -d vectors).",
"4 As can be noted, scenes depicting the same proportion clearly cluster together, thus indicating that using these representations in a retrieval task would lead to a very high precision.",
"Crucially, the clusters are perfectly ordered with respect to proportion.",
"Starting from the purple cluster on the left side ( 90 %) and proceeding clockwise, we find 83 % (green), 80 % (turquoise), 4 We used https://projector.tensorflow.org/ Figure 4: PropTarg.",
"75 % (brown), and so on, until reaching 10 % (light blue).",
"Proportions 0 % (blue) and 100 % (yellow) are neatly separated from the other clusters, being at the extremes of the clock'.",
"An improvement in the results can be also observed for setComp and vaqueQ, where the model achieves 0 .",
"99 acc.",
"and 0 .",
"98 r , respectively.",
"Figure 6 reports, for each quantifier, the probability values predicted by the model against the ground-truth ones.",
"As can be seen, the red lines (model) approximate very closely the green ones (humans).",
"In the following section, we perform further experiments to provide a deeper evaluation of the results.",
"As discussed in 1, the cognitive operation underlying setComp, vagueQ, and propTarg is different compared to that of estimating the absolute number of objects included in one set.",
"To investigate whether such dissociation emerges at the computational level, we tested a modified version of our proposed multi-task model where propTarg task Figure 5: PCA visualization of the last layer (before softmax) of the proportional task in the MTL model.",
"has been replaced with nTarg, namely the task of predicting the absolute number of targets.",
"One-task models were also tested to evaluate the difficulty of the task when performed in isolation.",
"Since the number of targets in the scenes ranges from 0 to 20 , nTarg is evaluated as a 21 -class classification task (majority class 0 . 13 ).",
"As reported in Table 2, the accuracy achieved by the one-task-end2end model is extremely high, i.e. around 0 .",
"97 .",
"This suggests that, when learned in isolation, the task is fairly easy, but only if the features are computed within the model.",
"In fact, using frozen features results in a quite low accuracy, namely 0 .",
"31 .",
"This pattern of results is even more interesting if compared against the results of the multi-task-number model.",
"When included in the multi-task pipeline, in fact, nTarg has a huge, 50 -point accuracy drop ( 0 . 48 ).",
"Moreover, both setComp and vagueQ turn out to be significantly hurt by the highest-level task, and experience a drop of around 14 and 17 points compared to the one-task-end2end model, respectively.",
"These findings seem to corroborate the incompatibility of the operations needed for solving the tasks.",
"Previous work exploring MTL suggested that defining a hierarchy of increasingly-complex tasks is beneficial for jointly learning related tasks (see 2.2).",
"In the present work, the order of the tasks was inspired by cognitive and linguistic abilities (see 1).",
"Though cognitively implau-model setComp vagueQ propTarg accuracy Pearson r accuracy chance/majority 0.470 0.320 0.058 one-task-frozen 0.763 0.548 0.068 one-task-end2end 0.793 0.922 0.059 multi-task-prop 0.943 0.960 0.539 Table 3: Unseen dataset.",
"sible, it might still be the case that the model is able to learn even when reversing the order of the tasks, i.e. from the conjectured highest-level to the lowest-level one.",
"To shed light on this issue, we tested the multi-task-prop model after reversing its architecture.",
"That is, propTarg is now the first task, followed by vagueQ, and setComp.",
"In contrast with the pattern of results obtained by the original pipeline, no benefits are observed for this version of MTL model compared to one-task networks.",
"In particular, both vagueQ ( 0 . 32 r ) and propTarg ( 0 . 08 acc.) performance are around chance level, with setComp reaching just 0 .",
"65",
"acc., i.e. 25 point lower than the one-task-end2end model.",
"The pipeline of increasing complexity motivated theoretically is thus confirmed at the computational level.",
"As discussed in 2.2, MTL is usually claimed to allow a higher generalization power.",
"To investigate whether our proposed multi-task-prop model genuinely learns to quantify from visual scenes, and not just associations between patterns and classes, we tested it with unseen combinations of targets/non-targets.",
"The motivation is that, even in the most challenging propTarg task, the model might learn to match a given combination, e.g. 3 : 12 , to a given proportion, i.e. 20 %.",
"If this is the case, the model would solve the task by learning just to assign a class to each of the 97 possible combinations included in the dataset.",
"If it learns a more abstract representation of the proportion of targets depicted in the scene, in contrast, it should be able to generalize to unseen combinations.",
"We built an additional dataset using the exact same pipeline described in 3.2.",
"This time, however, we randomly selected one combination per ratio ( 17 combinations in total) to be used only for validation and testing.",
"The remaining 80 combinations were used for training.",
"A balanced number of datapoints for each combination were generated in val/test, whereas datapoints in training set 426 were balanced with respect to ratios, by randomly selecting scenes among the remaining combinations.",
"The unseen dataset included around 14 K datapoints ( 80 % train, 10 % val, 10 % test).",
"Table 3 reports the results of the models on the unseen dataset.",
"Starting from setComp, we note a similar and fairly high accuracy achieved by the two one-task models ( 0 . 76 and 0 . 79 , respectively).",
"In vagueQ, in contrast, the one-task-end2end model neatly outperforms the simpler model ( 0 . 92 vs. 0 . 55 r ).",
"Finally, in propTarg both models are at chance level, with an accuracy that is lower than 0 .",
"07 .",
"Overall, this pattern of results suggests that propTarg is an extremely hard task for the separate models, which are not able to generalize to unseen combinations.",
"The multi-task-prop model, in contrast, shows a fairly high generalization power.",
"In particular, it achieves 0 .",
"54 acc.",
"in propTarg, that is, almost 10 times chance level.",
"The overall good performance in predicting the correct proportion can be appreciated in Figure 7, where the errors are represented by means of a heatmap.",
"The error analysis reveals that end-of-the-scale proportions ( 0 % and 100 %) are the easiest, followed by proportions 75 % ( 3 : 1 ), 67 % ( 2 : 1 ), 50 % ( 1 : 1 ), and 60 % ( 3 : 2 ).",
"More in general, negative ratios (targets < 50 %) are mispredicted to a much greater extent than are positive ones.",
"Moreover, the model shows a bias toward some proportions, that the model seems to see everywhere'.",
"However, the fact that the errors are found among the adjacent ratios (similar proportions) seems to be a convincing evidence that the model learns representations encoding genuine proportional information.",
"Finally, it is worth mentioning that in setComp and vagueQ the model achieves very high results, 0 .",
"94 acc.",
"and 0 .",
"96 r , respectively.",
"In the present study, we investigated whether ratio-based quantification mechanisms, expressed in language by comparatives, quantifiers, and proportions, can be computationally modeled in vision exploiting MTL.",
"We proved that sharing a common core turned out to boost the performance in all the tasks, supporting evidence from linguistics, language acquisition, and cognition.",
"Moreover, we showed",
"(a) the increasing complexity of the tasks,",
"(b) the interference of absolute number, and",
"(c) the high generalization power of MTL.",
"These results lead to many additional questions.",
"For instance, can these methods be successfully applied to datasets of real scenes?",
"We firmly believe this to be the case, though the results might be affected by the natural biases contained in those images.",
"Also, is this pipeline of increasing complexity specific to vision (non-symbolic level), or is it shared across modalities, in primis language?",
"Since linguistic expressions of quantity are grounded on a non-symbolic system, we might expect that a model trained on one modality can be applied to another, at least to some extent.",
"Even further, jointly learning representations from both modalities might represent an even more natural, human-like way to learn and refer to quantities.",
"Further work is needed to explore all these issues.",
"We kindly acknowledge Gemma Boleda and the AMORE team (UPF), Raquel Fernandez and the Dialogue Modelling Group (UvA) for the feedback, advice and support.",
"We are also grateful to Aurelie Herbelot, Stephan Lee, Manuela Piazza, Sebastian Ruder, and the anonymous reviewers for their valuable comments.",
"This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 715154).",
"We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research.",
"This paper reflects the authors' view only, and the EU is not responsible for any use that may be made of the information it contains."
] | [
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"objective",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"To improve the training efficiency of hierarchical recurrent models without compromising their performance, we propose a strategy named as the lower the simpler, which is to simplify the baseline models by making the lower layers simpler than the upper layers.",
"We carry out this strategy to simplify two typical hierarchical recurrent models, namely Hierarchical Recurrent Encoder-Decoder (HRED) and R-NET, whose basic building block is GRU.",
"Specifically, we propose Scalar Gated Unit (SGU), which is a simplified variant of GRU, and use it to replace the GRUs at the middle layers of HRED and R-NET.",
"Besides, we also use Fixed-size Ordinally-Forgetting Encoding (FOFE), which is an efficient encoding method without any trainable parameter, to replace the GRUs at the bottom layers of HRED and R-NET.",
"The experimental results show that the simplified HRED and the simplified R-NET contain significantly less trainable parameters, consume significantly less training time, and achieve slightly better performance than their baseline models.",
"With the advance of various deep learning frameworks, neural network based models proposed for natural language understanding tasks are becoming increasingly complicated.",
"To the best of our knowledge, a considerable part of these complicated models are both hierarchical and recurrent.",
"For example, Hierarchical Recurrent Encoder-Decoder (HRED) (Sordoni et al., 2015; Serban et al., 2016), which is a conversational model, is constructed by stacking three layers of GRUs (Cho et al., 2014).",
"Besides, several well-known Machine Reading Comprehension (MRC) models, such as R-NET (Wang et al., 2017) and Fusion-Net (Huang et al., 2017), are mainly composed of multiple layers of bidirectional GRUs (BiGRUs) or bidirectional LSTMs (BiLSTMs) (Hochreiter and Schmidhuber, 1997).",
"The above hierarchical recurrent models have achieved excellent performance, but training them usually consumes a lot of time and memory, that is because their computational graphs contain a large amount of operators and trainable parameters, which makes their training computationally expensive.",
"According to Williams and Zipser (1995), in the training of recurrent neural networks, it is the backward propagation rather than the forward propagation that consumes the majority of the computational resources.",
"Besides, considering the chain rule in the backward propagation, the complexity of computing gradients for a hierarchical recurrent model increases exponentially from the top layer of the model down to the bottom layer.",
"Therefore, to improve the training efficiency of hierarchical recurrent models, our strategy is to simplify the baseline models by making the lower layers simpler than the upper layers , which we name as the lower the simpler .",
"Here sim-pler means containing less operators and trainable parameters.",
"This strategy is guaranteed to work, since it can accelerate the computation of gradients, which is the substance of the backward propagation.",
"However, there is still a big concern: once the baseline models are simplified, will their performance be compromised?",
"To address this concern, we carry out our proposed strategy to simplify two typical hierarchical recurrent models, namely HRED and R-NET, whose basic building block is GRU.",
"Specifically, we propose Scalar Gated Unit (SGU), which is a simplified variant of GRU, and use it to replace the GRUs at the middle layers of HRED and R-NET.",
"Besides, we also use Fixed-size Ordinally-Forgetting Encoding (FOFE) (Zhang et al., 2015), which is an efficient encoding method without any trainable parameter, to replace the GRUs at the bottom layers of HRED and R-NET.",
"In the experiments, we separately compare the simplified HRED and the simplified R-NET with their baseline models in terms of both the training efficiency and the performance.",
"The experimental results show that the simplified models contain significantly less trainable parameters, consume significantly less training time, and achieve slightly better performance than their baseline models.",
"Hierarchical Recurrent Encoder-Decoder (HRED) is a conversational model for building end-to-end dialogue systems.",
"Since a dialogue is a sequence of sentences, where each sentence is a sequence of words, HRED models this hierarchy with a hierarchical recurrent structure.",
"Specifically, HRED consists of three layers of GRUs, which from bottom to top separately serve as the sentence-level encoder, the dialogue-level encoder, and the decoder.",
"The sentence-level encoder GRU iteratively takes the embeddings of the words in a sentence to update its hidden state, thus its final hidden state is a representation of the sentence.",
"The dialogue-level encoder GRU iteratively takes the representations of the sentences in a dialogue to update its hidden state, thus its hidden state at each time-step is a representation of the current dialogue.",
"The decoder GRU takes the current dialogue representation to initialize its hidden state so as to generate a response sentence word by word.",
"R-NET is an end-to-end MRC model that predicts an answer span for each given passage-question pair.",
"Specifically, R-NET consists of five layers, which from bottom to top are separately the embedding layer, the encoding layer, the matching layer, the self-matching layer, and the output layer.",
"The embedding layer maps the words to the word-level embeddings and the character-level embeddings.",
"The character-level embeddings are generated by processing the character embeddings of the words with a BiGRU and concatenating the forward GRU final hidden states and the backward GRU final hidden states.",
"The encoding layer processes the concatenation of the word-level embeddings and the character-level embeddings with another BiGRU and concatenates the forward GRU outputs and the backward GRU outputs so as to generate the context representations.",
"The matching layer uses a gated attention-based BiGRU to fuse the context representations of the question into those of the passage so as to generate the question-aware passage representations.",
"The self-matching layer uses another gated attention-based BiGRU to fuse the question-aware passage representations into themselves so as to generate the final passage representations.",
"On this basis, the output layer uses a pointer network (Vinyals et al., 2015) to generate an answer span.",
"Just like LSTM, GRU is a recurrent structure that leverages gating mechanisms to capture long-term dependencies in sequential data:",
"Update Gate: z t = ( W z [ h t 1 , x t ]) Reset Gate: r t = ( W r [ h t 1 , x t ]) New Memory: h t = tanh ( W h [ r t (cid:12) h t 1 , x t ]) Hidden State: h t = (1 z t ) (cid:12) h t 1 + z t (cid:12) h t",
"Researchers have proposed several simplified variants of GRU.",
"For example, Zhou et al. (2016) proposed Minimal Gated Unit (MGU), which combines the update gate and the reset gate into a single forget gate.",
"Compared with GRU, MGU contains less trainable parameters, consumes less training time, and achieves similar performance.",
"However, in this paper, to better carry out our proposed the lower the simpler strategy, we propose Scalar Gated Unit (SGU), which is an even more simplified variant of GRU: Scalar Update Gate: z t = ( w z [ h t 1 , x t ]) Scalar Reset Gate: r t = ( w r [ h t 1 , x t ]) New Memory: h t = tanh ( W h [ r t h t 1 , x t ]) Hidden State: h t = (1 z t ) h t 1 + z t h t By comparing the formulation of SGU with that of GRU, it is easy to see that both the update gate z t and the reset gate r t change from the vectors in GRU to the scalars in SGU.",
"Accordingly, the weights for generating the gates change from the matrices W z and W r in GRU to the vectors w z and w r in SGU.",
"Besides, the gating operator also changes from the element-wise multiplication (cid:12) in GRU to the scalar multiplication in SGU.",
"Therefore SGU is guaranteed to be the simplest among all the variants of GRU.",
"Fixed-size Ordinally-Forgetting Encoding (FOFE) is an encoding method that uses the following recurrent structure to map a varied-length word sequence to a fixed-size representation:",
"where h t is the hidden state at time step t , x t is the embedding of the t -th word, and ( 0 < < 1 ) is the forgetting factor that decays the previous hidden state.",
"Given a word sequence of length N , the final hidden state h N is a fixed-size representation of the word sequence.",
"Although formulated as a recurrent structure, FOFE can actually be implemented with an efficient matrix multiplication.",
"Besides, the forgetting factor is designed as a hyper-parameter so that FOFE contains no trainable parameter.",
"Therefore FOFE is guaranteed to be the simplest among all the recurrent structures.",
"As for the performance, according to Zhang et al. (2015), FOFE based language models outperform their LSTM based competitors.",
"According to the above descriptions, SGU is simpler than GRU, and FOFE is simpler than SGU.",
"Therefore, now we can carry out our proposed the lower the simpler strategy by using SGUs and FOFEs to replace certain GRUs in HRED and R-NET.",
"For HRED, we keep the decoder GRU at the top layer unchanged, use a SGU to replace the dialogue-level encoder GRU at the middel layer, and use a FOFE to replace the sentence-level encoder GRU at the bottom layer.",
"For R-NET, we keep the output layer, the self-matching layer, and the matching layer unchanged, use a bidirectional SGU (BiSGU) to replace the BiGRU that generates context representations at the encoding layer, and use a bidirectional FOFE (BiFOFE, i.e., running FOFE both forward and backward) to replace the BiGRU that generates character-level embeddings at the embedding layer.",
"After conducting the above replacements, we finally obtain a simplified HRED and a simplified R-NET.",
"Dialogue Datasets.",
"We compare the simplified HRED with the baseline HRED on two dialogue datasets, namely MovieTriples (Serban et al., 2016) and Ubuntu (Lowe et al., 2017).",
"MovieTriples contains over 240 , 000 dialogues collected from various movie scripts, with each dialogue consisting of three sentences.",
"Ubuntu contains over 490 , 000 dialogues collected from the Ubuntu chat-logs, with each dialogue consisting of seven sentences on average.",
"Both MovieTriples and Ubuntu have been randomly partitioned into three parts: a training set ( 80% ), a development set ( 10% ), and a test set ( 10% ).",
"MRC Dataset.",
"We compare the simplified R-NET with the baseline R-NET on an MRC dataset, namely SQuAD (Rajpurkar et al., 2016).",
"SQuAD contains over 100 , 000 passage-question pairs with human-generated answer spans, where the passages are collected from Wikipedia, and the answer to each question is guaranteed to be a fragment in the corresponding passage.",
"Besides, SQuAD has also been randomly partitioned into three parts: a training set ( 80% ), a development set ( 10% ), and a test set ( 10% ).",
"Both the training set and the development set are publicly available, but the test set is confidential.",
"HRED.",
"We implement both the simplified HRED and the baseline HRED with TensorFlow (Abadi et al., 2016).",
"For the word embeddings, we set their size to 200 , 400 , and 600 on MovieTriples and 600 on Ubuntu, initialize them randomly, and update them during the training.",
"For the forgetting factor of FOFE, we set it to 0 .",
"9 on both MovieTriples and Ubuntu.",
"For the hidden state size of the sentence-level encoder GRU, we set it to 200 , 400 , and 600 on MovieTriples and 600 on Ubuntu.",
"For the hidden state size of the dialogue-level encoder GRU and SGU, we set it to 1200 on both MovieTriples and Ubuntu.",
"For the hidden state size of the decoder GRU, we set it to 200 , 400 , and 600 on MovieTriples and 600 on Ubuntu.",
"For model optimization, we apply the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 0 .",
"0001 and a mini-batch size of 32 .",
"For performance evaluation, we use both perplexity and error rate as evaluation metrics.",
"R-NET.",
"We implement both the simplified R-NET and the baseline R-NET with TensorFlow.",
"For the word-level embeddings, we initialize them with the 300 -dimensional pre-trained GloVe (Penning-ton et al., 2014) vectors, and fix them during the Model Word Embedding Hidden States (bottom-up) Trainable Parameters Training Time (secs * epochs) Performance (ppl, err rate) Baseline HRED 200 200-1200-200 10,777,003 4,100 * 33 35.72, 66.62% 400 400-1200-400 18,740,403 4,660 * 29 34.35, 66.13% 600 600-1200-600 28,223,803 5,700 * 29 34.11, 65.95% Simplified HRED 200 200-1200-200 6,456,605 2,030 * 35 35.14, 66.46% 400 400-1200-400 12,019,605 2,210 * 30 34.01, 66.05% 600 600-1200-600 18,142,605 2,590 * 29 33.79, 65.89% Table 1: Comparing the simplified HRED with the baseline HRED on MovieTriples.",
"training.",
"For the character embeddings, we initialize them with the same pre-trained GloVe vectors, and update them during the training.",
"For the forgetting factor of FOFE, we set it to 0 .",
"7 .",
"For the hidden state size of both the BiGRUs and the BiSGU, we set it to 128 .",
"For model optimization, we apply the Adam optimizer with a learning rate of 0 .",
"0005 and a mini-batch size of 32 .",
"For performance evaluation, we use both Exact Match (EM) and F1 score as evaluation metrics, which are calculated on the development set.",
"For model comparison in the training efficiency, we use the same hardware (i.e., Intel Core i7-6700 CPU and NVIDIA GeForce GTX 1070 GPU) to train both the baseline models and the simplified models.",
"The experimental results show that our proposed the lower the simpler strategy improves the training efficiency of both HRED and R-NET without compromising their performance.",
"On the one hand, as shown in Table 1 and Table 2, the simplified HRED contains 25% 35% less trainable parameters, consumes over 50% less training time, and achieves slightly better performance than the baseline HRED.",
"Besides, Table 1 also shows that appropriately scaling up the model brings better performance but consumes more resource, which implies that the simplified HRED will perform better than the baseline HRED when time or memory is limited.",
"On the other hand, as shown in Table 3, the simplified R-NET contains 13% less trainable parameters, consumes 21% less training time, and achieves slightly better performance than the baseline R-NET.",
"In this paper, we propose a strategy named as the lower the simpler, which is aimed at improving the training efficiency of hierarchical recurrent models without compromising their performance.",
"This strategy has been verified on two typical hierarchical recurrent models, namely HRED and R-NET, where we replace their middle layers and bottom layers with two simpler recurrent structures.",
"The significance of this paper lies in that it reveals a methodology for avoiding unnecessary complexity in training hierarchical recurrent models, which we believe is applicable to many other hierarchical recurrent models.",
"This work is partially supported by a research donation from iFLYTEK Co., Ltd., Hefei, China, and a discovery grant from Natural Sciences and Engineering Research Council (NSERC) of Canada."
] | [
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"other"
] |
[
"Distance-based knowledge graph embeddings have shown substantial improvement on the knowledge graph link prediction task, from TransE to the latest state-of-the-art RotatE .",
"However, complex relations such as N-to-1, 1-to-N and N-to-N still remain challenging to predict.",
"In this work, we propose a novel distance-based approach for knowledge graph link prediction.",
"First we extend the RotatE from 2D complex domain to high dimensional space with orthogonal transforms to model relations.",
"The orthogonal transform embedding for relations keeps the capability for modeling symmetric/anti-symmetric, inverse and compositional relations while achieves better modeling capacity.",
"Second, the graph context is integrated into distance scoring functions directly.",
"Specifically, graph context is explicitly modeled via two directed context representations.",
"Each node embedding in knowledge graph is augmented with two context representations, which are computed from the neighboring outgoing and incoming nodes/edges respectively.",
"The proposed approach improves prediction accuracy on the difficult N-to-1, 1-to-N and N-to-N cases.",
"Our experimental results show that it achieves state-of-the-art results on two common benchmarks FB15k-237 and WNRR-18, especially on FB15k-237 which has many high in-degree nodes.",
"Code available at https://github.",
"com/JD-AI-Research-Silicon-Valley/ KGEmbedding-OTE .",
"Knowledge graph is a multi-relational graph whose nodes represent entities and edges denote relationships between entities.",
"Knowledge graphs store facts about people, places and world from various sources.",
"Those facts are kept as triples (head entity, relation, tail entity) and denoted as ( h, r, t ) .",
"A large number of knowledge graphs, such as Freebase (Bollacker et al., 2008), DBpedia (Auer et al., 2007), NELL (Carlson et al., 2010) and YAGO3 (Mahdisoltani et al., 2013), have been built over the years and successfully applied to many domains such as recommendation and question answering (Bordes et al., 2014; Zhang et al., 2016).",
"However, these knowledge graphs need to be updated with new facts periodically.",
"Therefore many knowledge graph embedding methods have been proposed for link prediction that is used for knowledge graph completion.",
"Knowledge graph embedding represents entities and relations in continuous vector spaces.",
"Started from a simple and effective approach called TransE (Bordes et al., 2013), many knowledge graph embedding methods have been proposed, such as TransH (Wang et al., 2014), DistMult (Yang et al., 2014), ConvE (Dettmers et al., 2018) to the latest RotatE (Sun et al., 2019) and QuatE (Zhang et al., 2019).",
"Though much progress has been made, 1-to-N, N-to-1, and N-to-N relation predictions (Bor-des et al., 2013; Wang et al., 2014) still remain challenging.",
"In Figure 1, relation profession demonstrates an N-to-N example and the corresponding edges are highlighted as green.",
"Assuming the triple (SergeiRachmaninoff, Profession, Pianist) is unknown.",
"The link prediction model takes SergeiRachmaninoff and relation Profession and rank all entities in the knowledge graph to predict Pianist.",
"Entity SergeiRachmaninoff connected to multiple entities as head entity via relation profession, while Pianist as a tail entity also reaches to multiple entities through relation profession.",
"It makes the N-to-N prediction hard because the mapping from certain entity-relation pair could lead to multiple different entities.",
"Same issue happens with the case of 1-to-N and N-to-1 predictions.",
"The recently proposed RotatE (Sun et al., 2019) Figure 1: Snapshot of knowledge graph in FB15k-237.",
"In this work, a novel distance-based knowledge graph embedding called orthogonal transform embedding ( OTE ) with graph context is proposed to alleviate the 1-to-N, N-to-1 and N-to-N issues, while keeps the desired relation patterns as RotatE .",
"First, we employ orthogonal transforms to represent relations in high dimensional space for better modeling capability.",
"The Orthogonal transform embedding also models the symmetry/antisymmery, inversion and compositional relation patterns just as RotatE does.",
"RotatE can be viewed as an orthogonal transform in 2D complex space.",
"Second, we integrate graph context directly into the distance scoring, which is helpful to predict 1-to-N, N-to-1 and N-to-N relations.",
"For example, from the incomplete knowledge graph, people find useful context information, such as (SergeiRach-maninoff, role, Piano) and (SergeiRachmaninoff, Profession, Composer) in Figure",
"1. In this work, each node embedding in knowledge graph is augmented with two graph context representations, computed from the neighboring outgoing and incoming nodes respectively.",
"Each context representation is computed based on the embeddings of the neighbouring nodes and the corresponding relations connecting to these neighbouring nodes.",
"These context representations are used as part of the distance scoring function to measure the plausibility of the triples during training and inference.",
"We show that OTE together with graph context modeling performs consistently better than RotatE on the standard benchmark FB15k-237 and WN18RR datasets.",
"",
"A new orthogonal transform embedding OTE , is proposed to extend RotatE from 2D space to high dimensional space, which also models sym-metry/antisymmery, inversion and compositional relation patterns; A directed graph context modeling method is proposed to integrate knowledge graph context (including both neighboring entity nodes and relation edges) into the distance scoring function; Experimental results of OTE on standard benchmark FB15k-237 and WN18RR datasets show consistent improvements over RotatE , the state of art distance-based embedding model, especially on FB15k-237 with many high in-degree nodes.",
"On WN18RR our results achieve the new state-of-the-art performance.",
"Knowledge graph embedding could be roughly categorized into two classes (Wang et al., 2017): distance-based models and semantic matching models.",
"Distance-based model is also known as additive models, since it projects head and tail entities into the same embedding space and the distance scoring between two entity embeddings is used to measure the plausibility of the given triple.",
"TransE (Bordes et al., 2013) is the first and most representative translational distance model.",
"A series of work is conducted along this line such as TransH (Wang et al., 2014), TransR (Lin et al., 2015) and TransD (Ji et al., 2015) etc.",
"RotatE (Sun et al., 2019) further extends the computation into complex domain and is currently the state-of-art in this category.",
"On the other hand, Semantic matching models usually take multiplicative score functions to compute the plausibility of the given triple, such as DistMult (Yang et al., 2014), ComplEx (Trouillon et al., 2016), ConvE (Dettmers et al., 2018), TuckER (Balazevic et al., 2019) and QuatE (Zhang et al., 2019).",
"ConvKB (Nguyen et al., 2017) and CapsE (Nguyen et al., 2019) further took the triple as a whole, and fed head, relation and tail embeddings into convolutional models or capsule networks.",
"The above knowledge graph embedding methods focused on modeling individual triples.",
"However, they ignored knowledge graph structure and did not take advantage of context from neighbouring nodes and edges.",
"This issue inspired the usage of graph neural networks (Kipf and Welling, 2016; Velickovic et al., 2017) for graph context modeling.",
"Encoder-decoder framework was adopted in (Schlichtkrull et al., 2017; Shang et al., 2019; Bansal et al., 2019).",
"The knowledge graph structure is first encoded via graph neural networks and the output with rich structure information is passed to the following graph embedding model for prediction.",
"The graph model and the scoring model could be end-to-end trained together, or the graph encoder output was only used to initialize the entity embedding (Nathani et al., 2019).",
"We take another approach in this paper: we integrate the graph context directly into the distance scoring function.",
"Orthogonal transform is considered to be more stable and efficient for neural networks (Saxe et al., 2013; Vorontsov et al., 2017).",
"However, to optimize a linear transform with orthogonal property reserved is not straightforward.",
"Soft constraints could be enforced during optimization to encourage the learnt linear transform close to be orthogonal.",
"Bansal et al. (2018) extensively compared different orthogonal regularizations and find regularizations make the training faster and more stable in different tasks.",
"On the other hand, some work has been done to achieve strict orthogonal during optimization by applying special gradient update scheme.",
"Harandi and Fernando (2016) proposed a Stiefel layer to guarantee fully connected layers to be orthogonal by using Reimannian gradients.",
"Huang et al. (2017) consider the estimation of orthogonal matrix as an optimization over multiple dependent stiefel manifolds problem and solve it via eigenvalue decomposition on a proxy parameter matrix.",
"Vorontsov et al. (2017) applied hard constraint on orthogonal transform update via Cayley transform.",
"In this work, we construct the orthogonal matrix via Gram Schmidt process and the gradient is calculated automatically through autograd mechanism in PyTorch (Paszke et al., 2017).",
"We consider knowledge graph as a collection of triples D = {( h, r, t )} with V as the graph node set, and R as the graph edge set.",
"Each triple has a head entity h and tail entity t , where h, t V .",
"Relation r R connects two entities with direction from head to tail.",
"As discussed in the introduction section, 1-to-N, N-to-1 and N-to-N relation prediction (Bordes et al., 2013; Wang et al., 2014) are difficult to deal with.",
"They are addressed in our proposed approach by: 1) orthogonal relation transforms that operate on groups of embedding space.",
"Each group is modeled and scored independently, and the final score is the sum of all group scores.",
"Hence, each group could address different aspects of entity-relation pair and alleviate the 1-to-N and N-to-N relation mapping issues; and 2) directed graph context to integrate knowledge graph structure information to reduce the ambiguity.",
"Next, we first briefly review RotatE that motivates our orthogonal transform embedding ( OTE ), and then describe the proposed method in details.",
"OTE is inspired by RotatE (Sun et al., 2019).",
"In RotatE , the distance scoring is done via Hadamard production (element-wise) defined on the complex domain.",
"Given a triple ( h, r, t ) , the corresponding embedding are e h , r , e t , where e h and e t R 2 d , r R d , and d is the embedding dimension.",
"For each dimension i , e [ 2 i ] and e [ 2 i + 1 ] are corresponding real and imaginary components.",
"The projection e t of t from corresponding relation and head entities is conducted as an orthogonal transform as below: [ e t [ 2 i ] e t [ 2 i + 1 ]] = M r ( i ) [ e h [ 2 i ] e h [ 2 i + 1 ]] = [ cos r ( i ) sin r ( i ) sin r ( i ) cos r ( i ) ] [ e h [ 2 i ] e h [ 2 i + 1 ]] where M r ( i ) is a 2D orthogonal matrix derived from r .",
"Though RotatE is simple and effective for knowledge graph link prediction, it is defined in 2 D complex domain and thus has limited modeling capability.",
"A natural extension is to apply similar operation on a higher dimensional space.",
"We use e h , M r , e t to represent embeddings of head, relation and tail entity, where e h , e t R d , and d is the dimension of the entity embedding.",
"The entity embedding e x , where x = { h, t } , is further divided into K sub-embeddings, e.g., e x = [ e x ( 1 ) ; ; e x ( K )] , where e x ( i ) R d s and d = K d s .",
"M r is a collection of K linear transform matrix M r = { M r ( 1 ) , , M r ( K )} , and M r ( i ) R d s d s .",
"For each sub-embedding e t ( i ) of tail t , we define the projection from h and r to t as below: e t ( i ) = f i ( h, r ) = ( M r ( i )) e h ( i ) (1) where is the Gram Schmidt process (see details in Section 3.3) applied to square matrix M r ( i ) .",
"1 is re-written as e t ( i ) = diag ( exp ( s r ( i ))) ( M r ( i )) e h ( i ) (2) Then, the corresponding distance scoring function is defined as d (( h, r ) , t ) = K i = 1 ( e t ( i ) e t ( i )) (3) For each sub-embedding e h ( i ) of head h , we define the projection from r and t to h as below: e h ( i ) = diag ( exp ( s r ( i ))) ( M r ( i )) T e t ( i ) (4) where the reverse project from tail to head is simply transposing the ( M r ( i )) and reversing the sign of s r .",
"The output transform ( M r ( i )) is an orthogonal matrix derived from M r ( i ) .",
"e t is the concatenation of all sub-vector e t ( i ) from Eq.",
"1, e.g., e t = f ( h, r ) = [ e t ( 1 ) ; ; e t ( K )] .",
"The L 2 norm of e h ( i ) is preserved after the orthogonal transform.",
"We further use a scalar tensor s r ( i ) R d s to scale the L 2 norm of each group of embedding separately.",
"Eq.",
"Then, the corresponding distance scoring function is defined as d ( h, ( r, t )) = K i = 1 ( e h ( i ) e h ( i )) .",
"We employ Gram-Schmidt process to orthogonalize a linear transform into an orthogonal transform (i.e., ( M r ( i )) in Section 3.2).",
"The Gram-Schmidt process takes a set of tensor S = { v 1 , , v k } for k d s and generates an orthogonal set S = { u 1 , , u k } that spans the same k dimensional subspace of R d s as S .",
"t i = v k k 1 j = 1 v k , t j t j , t j t j (6) u i = t i t i (7) where t 1 = v 1 , t is the L 2 norm of vector t and v, t denotes the inner product of v and t .",
"Orthogonal transform has many desired properties, for example, the inverse matrix is obtained by simply transposing itself.",
"It also preserves the L 2 norm of a vector after the transform.",
"For our work, we are just interested in its property to obtain inverse matrix by simple transposing.",
"This saves the number of model parameters (see Table 3).",
"It can be easily proved that OTE has the ability to model and infer all three types of relation patterns: symmetry/antisymmetry, inversion, and composition as RotatE does.",
"The proof is listed in Appendix A. It should be noted that, M r ( i ) is calculated every time in the neural networks forward computation to get orthogonal matrix ( M r ( i )) , while the corresponding gradient is calculated and propagated back to M r ( i ) via autograd computation within PyTorch during the backward computation.",
"It eliminates the need of special gradient update schemes employed in previous hard constraint based orthogonal transform estimations (Harandi and Fernando, 2016; Vorontsov et al., 2017).",
"In our experiments, we initialize M r ( i ) to make sure they are with full rank 1 .",
"During training, we also keep checking the determinant of M r ( i ) .",
"We find the update is fairly 1 A real random matrix has full rank with probability 1 (Slinko, 2000).",
"We use different random seeds to make sure the generated matrix is full rank.",
"stable that we don't observe any issues with sub-embedding dimensions varied from 5 to 100 .",
"The knowledge graph is a directed graph: valid triple ( h, r, t ) does not mean ( t, r, h ) is also valid.",
"Therefore, for a given entity in knowledge graph, there are two kinds of context information: nodes that come into it and nodes that go out of it.",
"Specially, in our paper, for each entity e , we consider the following two context settings:",
"1. If e is a tail, all the (head, relation) pairs in the training triples whose tail is e are defined as Head Relation Pair Context",
".",
"2. If e is a head, all the (relation, tail) pairs in the training triples whose head is e are defined as Relation Tail Pair Context .",
"Figure 1 demonstrates the computation of graph context for a testing triple (SergeiRachmaninoff, profession, Pianist).",
"Edges for relation profession are colored as green.",
"Entities marked with are head entities to entity Pianist, and these entities and corresponding relations to connect Pianist form the head relation pair context of Pianist.",
"While entities with are tail entities for entity SergeiRachmaninoff.",
"Those entities and corresponding relations are the relation tail graph context of entity SergeiRachmaninoff.",
"For a given tail t , all head-relation pairs ( h , r ) of the triples with tail as t are considered as its graph context and denoted as Ng ( t ) .",
"First, we compute the head-relation context representation e ct as the average from all these pairs in Ng ( t ) as below: e ct = ( h ,r ) Ng ( t ) f ( h , r ) + e t Ng ( t ) + 1 (8) where e t is the embedding of the tail t , f ( h , r ) is the representation of ( h , r ) induced from Eq.",
"2. We use e t in Eq.",
"8 to make the computation of context representation possible when Ng ( t ) is empty.",
"This can be viewed as a kind of additive smoothing for context representation computation.",
"Then, we compute the distance of the head-relation context of t and the corresponding orthogonal transform based representation of a triple ( h, r, t ) as follow.",
"where e t ( i ) is computed from Eq.",
"2. There is no new parameter introduced for the graph context modeling, since the message passing is done via OTE entity-relation project f ( h , r ) .",
"The graph context can be easily applied to other translational embedding algorithms, such as RotatE and TransE etc, by replacing OTE .",
"For a given head h , all relation-tail pairs ( r , t ) of the triples with head as h are considered as its graph context and denoted as Ng ( h ) .",
"First, we compute the relation-tail context representation e ch as the average from all these pairs in Ng ( h ) as below: e ch = ( r ,t ) Ng ( h ) f ( r , t ) + e h Ng ( h ) + 1 (10) where f ( r , t ) is computed from Eq.",
"4.",
"Then, we compute the distance of the relation-tail context of h and the corresponding orthogonal transform based representation of a triple ( h, r, t ) as follow.",
"We further combine all four distance scores (Eq. 3, Eq. 5, Eq. 9 and Eq. 11) discussed above as the final distance score of the graph contextual orthogonal transform embedding ( GC-OTE ) for training and inference",
"Therefore the full GC-OTE model can be seen as an ensemble of K local GC-OTE models.",
"This view provides an intuitive explanation for the success of GC-OTE.",
"Optimization Self-adversarial negative sampling loss (Sun et al., 2019) is used to optimize the embedding in this work, L = p ( h , r, t ) log ( d all ( h , r, t ) ) log ( d all ( h, r, t )) (13) where is a fixed margin, is sigmoid function, ( h , r, t ) is negative triple, and p ( h , r, t ) is the negative sampling weight defined in (Sun et al., 2019).",
"Two commonly used benchmark datasets (FB15k-237 and WN18RR) are employed in this study to evaluate the performance of link prediction.",
"FB15k-237 (Toutanova and Chen, 2015) dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs.",
"The knowledge base triples are a subset of the FB15K (Bordes et al., 2013), originally derived from Freebase.",
"The inverse relations are removed in FB15k-237.",
"WN18RR (Dettmers et al., 2018) is derived from WN18 (Bordes et al., 2013), which is a subset of WordNet.",
"WN18 consists of 18 relations and 40,943 entities.",
"However, many text triples obtained by inverting triples from the training set.",
"Thus WN18RR (Dettmers et al., 2018) is created to ensure that the evaluation dataset does not have test leakage due to redundant inverse relation.",
"Each dataset is split into three sets for: training, validation and testing, which is same with the setting of (Sun et al., 2019).",
"The statistics of two data sets are summarized at Table",
"1. Only triples in the training set are used to compute graph context.",
"Following the evaluation protocol in (Dettmers et al., 2018; Sun et al., 2019), each test triple ( h, r, t ) is measured under two scenarios: head focused ( ? , r, t ) and tail focused ( h, r, ? ) .",
"For each case, the test triple is ranked among all triples with masked entity replaced by entities in knowledge graph.",
"Those true triples observed in either train/validation/test set except the test triple will be excluded during evaluation.",
"Top 1, 3, 10 (Hits@1, Hits@3 and Hits@10), and the Mean Reciprocal Rank (MRR) are reported in the experiments.",
"Hyper-parameter settings The hyper-parameters of our model are tuned by grid search during training process, including learning rate, embedding dimension d and sub-embedding dimension d s .",
"In our setting, the embedding dimension is defined as the number of parameters in each entity embedding.",
"Each entity embedding consists of K sub-embeddings with dimension d s , i.e., d = K d s .",
"There are two steps in our model training: 1) the model is trained with OTE or RotatE models, and 2) graph context based models are fine tuned on these pre-trained models.",
"The parameter settings are selected by the highest MRR with early stopping on the validation set.",
"We use the adaptive moment (Adam) algorithm (Kingma and Ba, 2014) to train the models.",
"Specially, for FB15k-237, we set embedding dimension d = 400 , sub-embedding dimension d s = 20 , and the learning rates to 2 e 3 and 2 e 4 for pre-training and fine-tuning stages respectively; for WN18RR dataset, we set d = 400 , d s = 4 , and the learning rates to 1 e 4 and 3 e 5 for pre-training and fine-tuning stages.",
"Implementation Our models are implemented by PyTorch and run on NVIDIA Tesla P40 Graphics Processing Units.",
"The pre-training OTE takes 5 hours with 240,000 steps and fine-tuning GC-OTE takes 23 hours with 60,000 steps.",
"Though, it takes more computation for graph context based model training, the inference could be efficient if both head and tail context representations are precomputed and saved for each entity in the knowledge graph.",
"In this section, we first present the results of link prediction, followed by the ablation study and error analysis of our models.",
"Table 2 compares the proposed models ( OTE and graph context based GC-OTE ) to several state-of-the-art models: including translational distance based TransE (Bordes et al., 2013), RotatE (Sun et al., 2019); semantic matching based DistMult (Yang et al., 2014), ComplEx (Trouil-lon et al., 2016), ConvE (Dettmers et al., 2018), TuckER (Balazevic et al., 2019) and QuatE (Zhang et al., 2019), and graph context information based R-GCN+ (Schlichtkrull et al., 2017), SACN (Shang et al., 2019) and A2N (Bansal et al., 2019).",
"These Model FB15k-237 WN18RR MRR H1 H3 H10 MRR H1 H3 H10 TransE .294 -.465 .226 -.501 RotatE .338 .241 .375 .533 .476 .428 .492 .571 DistMult .241 .155 .263 .419 .43 .39 .44 .49 ComplEx .247 .158 .275 .428 .44 .41 .46 .51 ConvE .325 .237 .356 .501 .43 .40 .44 .52 QuatE .348 .248 .382 .550 .488 .438 .508 .582 TurkER .358 .266 .392 .544 .470 .443 .482 .526 R-GCN+ .249 .151 .264 .417 --SACN .352 .261 .385 .536 .47 .43 .48 .54 A2N .317 .232 .348 .486 .45 .42 .46 .51 OTE .351 .258 .388 .537 .485 .437 .502 .587 GC-OTE .361 .267 .396 .550 .491 .442 .511 .583 Table 2: Link prediction for FB15k-237 and WN18RR on test sets.",
"From Table 2, we observe that: 1) on FB15k-237, OTE outperforms RotatE , and GC-OTE outperforms all other models on all metrics.",
"Specifically MRR is improved from 0 .",
"338 in RotatE , to 0 .",
"361 , about 7 % relative performance improvement.",
"OTE which increases sub-embedding dimension from 2 to 20 , and graph context each contributes about half the improvement; 2) on WN18RR, OTE outperforms RotatE and GC-OTE achieves the new state-of-the-art results (as far as we know from published papers).",
"These results show the effectiveness of the proposed OTE and graph context for the task of predicting missing links in knowledge graph.",
"Moreover, GC-OTE improves more on FB15k-237 than on WN18RR.",
"This is because FB15k-237 has richer graph structure context compared to WN18RR: an average of 19 edges per node v.s. 2 edges per node in WN18RR.",
"These results indicate that the proposed method GC-OTE is more effective on data set with rich context structure information.",
"Table 3 shows the results of ablation study of the proposed models and compares the number of model parameters with RotatE on FB15k-237 validation set.",
"We perform the ablation study with embedding dimension of 400 .",
"The entity embedding dimension for RotatE-S and RotatE-L are 400 and 2000 , respectively.",
"First we notice that increasing embedding size from 400 to 2000 makes RotatE model size more than quadrupled while the performance gain is very limited (Row 1 and 2 in Table 3); increasing group embedding size from 2 to 20 does not increase the model size of OTE much, but with nice performance gain (Row 3 and 4 in Table 3).",
"The model size of OTE is less than one-third of the size of RotatE-L but with better performance.",
"This shows the effectiveness of the OTE .",
"Impact of sub-embedding dimension : we fix the embedding dimension as 400 , and increase the sub-embedding dimension d s from 2 to 20 , the MRR of OTE is improved from 0.327 to 0.355 (See Row 3 and Row 4).",
"For RotatE , the entity is embedded in complex vector space, this is similar to our setting with sub-embedding dimension",
"= 2. Our results show that increasing the sub-dimension with OTE is beneficial to link prediction.",
"Impact of orthogonal transform : we replace the orthogonal transform operation in OTE with two different settings, 1) removing the diagonal scalar tensor as Eq.",
"1 (See OTE-scalar ) and 2) using normal linear transform rather than orthogonal transform (See LNE ).",
"Both settings lead to MRR degradation.",
"This indicates the proposed orthogonal transform is effective in modeling the relation patterns which are helpful for link prediction.",
"Impact of graph context : we add the graph context based model to both OTE (See GC-OTE ) and RotatE-L (See GC-RotatE-L ).",
"We observe that MRRs are improved for both RotatE-L and OTE .",
"This shows the importance of modeling context information for the task of link prediction.",
"Sub-embedding dimension size: in Table 3 we show that increasing sub-embedding dimension brings a nice improvement on MRR.",
"Is the larger size always better?",
"Figure 2 shows the impact of d s on the OTE performance with the changing of sub-embedding size.",
"We fix the entity embedding dimension as 400 , and vary the sub-embedding size from 2 , 5 , 10 , 20 , 50 , all the way to 100 .",
"The blue line and green bar represent MRR and H @10 value, respectively.",
"From Figure 2 we observe that, both MRR and Hit@10 are improved and slowly saturated around d s = 20 The similar experiments are also conducted on WN18RR data set and we find the best sub-embedding dimension is 4 on WN18RR.",
"RotatE-L GC-OTE Type Num.",
"H T A H T A 1-to-N 2255 .710 .169 .440 .718 .204 .461 N-to-1 5460 .156 .850 .503 .209 .863 .536 N-to-N 9763 .490 .631 .561 .508 .651 .579 Table 4: H@10 from FB15-237 validation set by categories (1-to-N, N-to-1 and N-to-N).",
"We present error analysis of the proposed model on 1-to-N, N-to-1 and N-to-N relation predictions on FB15k-237.",
"Table 4 shows results in terms of Hit@10, where Num. is the number of triples in the validation set belonging to the corresponding category, H/T represents the experiment to predict head entity /tail entity, and A denotes average result for both H and T.",
"Assume c ( h, r ) and c ( r, t ) are the number of ( h, r ) and ( r, t ) pairs appeared in triples from the training set respectively.",
"A triple ( h, r, t ) from the validation set is considered as one of the categories in the following: ( h, r, t ) = N-to-1 , if c ( h, r ) > 1 and c ( r, t ) 1 1-to-N , if c ( h, r ) 1 and c ( r, t ) > 1 N-to-N , if c ( h, r ) > 1 and c ( r, t ) > 1 other.",
"From Table 4 we observe that, comparing to RotatE large model, the proposed model get better Hit@10 on all cases, especially for the difficult cases when we attempt to predicting the head entity for 1-to-N/N-to-N relation type, and tail entity in N-to-1/N-to-N relation type.",
"The reason is because that in the proposed model, the groupings of sub-embedding relation pairs in OTE and graph context modeling both are helpful to distinguish N different tails/heads when they share the same (head, rel)/(rel, tail).",
"In this paper we propose a new distance-based knowledge graph embedding for link prediction.",
"It includes two-folds.",
"First, OTE extends the modeling of RotatE from 2D complex domain to high dimensional space with orthogonal relation transforms.",
"Second, graph context is proposed to integrate graph structure information into the distance scoring function to measure the plausibility of the triples during training and inference.",
"The proposed approach effectively improves prediction accuracy on the difficult N-to-1, 1-to-N and N-to-N link predictions.",
"Experimental results on standard benchmark FB15k-237 and WN18RR show that OTE improves consistently over RotatE , the state-of-the-art distance-based embedding model, especially on FB15k-237 with many high in-degree nodes.",
"On WN18RR our model achieves the new state-of-the-art results.",
"This work is partially supported by Beijing Academy of Artificial Intelligence (BAAI)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain"
] |
[
"We present an incremental syntactic representation that consists of assigning a single discrete label to each word in a sentence, where the label is predicted using strictly incremental processing of a prefix of the sentence, and the sequence of labels for a sentence fully determines a parse tree.",
"Our goal is to induce a syntactic representation that commits to syntactic choices only as they are incrementally revealed by the input , in contrast with standard representations that must make output choices such as attachments speculatively and later throw out conflicting analyses.",
"Our learned representations achieve 93.72 F1 on the Penn Treebank with as few as 5 bits per word, and at 8 bits per word they achieve 94.97 F1, which is comparable with other state of the art parsing models when using the same pre-trained embeddings.",
"We also provide an analysis of the representations learned by our system, investigating properties such as the interpretable syntactic features captured by the system and mechanisms for deferred resolution of syntactic ambiguities.",
"Language comprehension in humans is, to a nontrivial extent, an incremental process.",
"Human speech is heard word by word, and, while the precise nature of the incrementality is not a settled question, a listener does not wait for a full sentence to end before any processing or understanding can begin.",
"In contrast, some of the highest-performing machine models for syntactic parsing operate precisely in this manner: they require a full sentence as input, and perform deeply bidirectional processing to produce their outputs.",
"Human capabilities suggest that we should also be able to build accurate parsers that instead operate incrementally.",
"one word at a time, and after each word output any number of actions such as shift or reduce , where the full sequence of actions represents a syntactic analysis of the input.",
"However, in this paper we are interested in a stronger notion of incrementality, which we refer to as non-speculative incrementality .",
"We say that a representation is speculative when a symbol in the representation encodes a commitment to a certain syntactic decision, but the evidence for that decision is not present in the corresponding prefix of the input.",
"Transition-based systems are frequently speculative; we give an example sentence in Figure 1, where a decision must be made regarding whether the preposition on attaches to noun proposal or the verb approved.",
"Transition-based approaches such as shift-reduce or attach-juxtapose (Yang and Deng, 2020) place the action that determines the preposition attachment earlier in the left-to-right processing pattern than the disambiguating word (Monday or taxes) that reveals the correct analysis.",
"Similarly in CCG parsing, the representation of a CCG analysis in the form of a sequence of supertags is likewise speculative including for this same example, where the 3086 correct supertag for each word cannot be predicted based on only that word and its preceding context.",
"The speculative nature of incremental transition systems or CCG supertags makes it impractical to recover an accurate parse by simply committing to the highest-scoring option at each each point where a decision must be made.",
"An incremental parser in the speculative paradigms must instead consider multiple analyses in parallel, and later throw out analyses that are inconsistent with the sentence; this can be done through a procedure like beam search.",
"In other words, the true representation of syntactic information at each point in the sentence is not a single sequence of actions (or supertags), but rather a belief state (beam state) that contains multiple candidate analyses.",
"In the limit of infi-nite beam size, the parser ceases to be incremental: its belief state can contain all reasonable analyses, deferring all choices to the very end of a sentence.",
"Our goal in this work is to design a representation for parsing that is maximally speculation free .",
"In other words, it should record commitments to syntactic choices only as they are incrementally revealed by the input.",
"We additionally want our representation to relate to constituency trees in a similar way to how transition-based actions relate to them: that is, through a deterministic transformation function.",
"A sequence of shift-reduce or attach-juxtapose actions is not identical to a parse tree, but it can be mapped to a tree using a deterministic automaton that interprets the discrete actions as operations on a tree fragment or stack of tree fragments.",
"A sequence of supertags is likewise not the same as a tree, and mapping it to a tree requires further processing in the form of finding and applying an appropriate series of combinators.",
"These mappings are non-trivial, especially in the case of CCG, so we should not expect our mapping to be trivial either our only requirement is that it be deterministic and operate entirely from our representation, having already discarded the raw text in the sentence.",
"Finally, we would like our representation to take the familiar form of a sequence of discrete symbols.",
"We propose to arrive at such a representation through end-to-end learning , rather than manual construction.",
"The model can then make its own decisions about when syntactic decisions take place, how to handle cases of ambiguity, and how to represent belief states within the learned system itself.",
"This system will learn to encode linguistic and structural features to allow effective incremental parsing.",
"Our end-to-end approach is a model that proceeds in two stages.",
"The first stage maps from individual words to syntactic decisions, which are represented as discrete tokens drawn from a small, bounded vocabulary.",
"The second component of our system is a read-out network that takes a sequence of discrete tags as input and produces a conventional parse tree as output.",
"Both stages are trained jointly in an end-to-end manner.",
"Crucially, we do not a priori assign meaning to the discrete tokens (e.g. actions like shift , or supertags like in CCG); we only specify the total number of symbols available to the model to control the complexity of the representation.",
"Unlike a speculative system, our representation can be used by finding the single highest-scoring tag at each position in a sentence, and then converting the resulting sequence of tags into a tree.",
"Important properties that we evaluate for our proposed approach are its quality (as measured by F1 score on the Penn Treebank), as well as compactness (how many bits per word are required to encode syntactic information).",
"At 5 bits per word, a parser using our representations achieves 93.72 F1, and at 8 bits per word it achieves 94.97 F1 comparable to the method of Kitaev et al. (2019) trained with the same pre-trained embeddings.",
"We further provide an analysis of the symbols learned by our model, including explorations of the linguistic features captured by the symbol set, the information content of our incremental representation for prefixes of a full utterance, and the system's ability to defer resolution of attachment ambiguities.",
"This work is inspired by the concept of incremental parsing implemented in works such as Larchevque (1995) and Lane and Henderson (2001).",
"With regards to neural parsers, recent strides in incremental parsing include the attach-juxtapose parsers from Yang and Deng (2020).",
"However, these neural models often have incremental tree construction mechanisms, but are not incremental from the raw input level due to reliance on pretrained bidirectional models such as the works of Devlin et al. (2019) and Yang et al. (2019).",
"The placement of an information bottleneck on token representations has also been studied in the bidirectional case by Li and Eisner (2019), who reported many similar findings about the syntactic features learned by discrete tags.",
"However, our model differs in that it explores the incremental, non-speculative case, as well as in the implementation of the parsing model and its constraints on representation size.",
"Our incremental parsing system can be compared to manually formulated representations such as shift-reduce or CCG supertagging.",
"However, for purely incremental parsing, limitations of shift-reduce and CCG supertagging may necessitate the use of beam search to produce more accurate, viable parse trees, as in the works of Zhu et al. (2013) and Bhargava and Penn (2020).",
"Other works have also analyzed the discrete features useful for syntactic parsing.",
"Some researchers augmented parsing models by adding discrete, hand-coded indicator features based on the raw sentence as in Hall et al. (2014).",
"Similar hand-coded, discrete features have been shown to improve other tasks such as NMT (Sennrich and Haddow, 2016).",
"Previous experiments by Gaddy et al. (2018) have analyzed whether neural parsers based on bidirectional LSTMs capture other handmade indicator functions from earlier hypotheses by Petrov and Klein (2007).",
"By contrast, our model seeks to directly learn new features, and in fact, many of the hand-made indicators from previous works arise naturally in the learned symbols of our model.",
"There also exists work examining the learned grammatical rules of a stack-based recurrent neural network via analysis of an attention mechanism (Kuncoro et al., 2017).",
"By contrast, our analysis has a lesser focus on the attention distribution between tokens, and a greater focus on the features and syntactic decisions captured by each individual symbol.",
"Our model is based on a parsing architecture that contains an encoder layer that uses a pretrained network and a chart-based decoder, as detailed in Kitaev and Klein (2018).",
"To ensure incrementality, the encoder for this incremental model uses GPT-2 as a base, which disallows a backwards flow of information from future tokens (Radford et al., 2019).",
"At the interface between the pre-trained encoder and subsequent parts of the model (which we refer to as the read-out network ), we introduce a dis-cretization step that collapses the continuous, high-dimensional vectors from the encoder network to a small inventory of discrete symbols.",
"The read-out network has access only to these discrete symbols and not to the original text of the input; in other words, the sequence of discrete symbols must encode all information necessary to represent the syntactic structure of the sentence.",
"We introduce an information bottleneck that limits the size of the discrete token vocabulary to as few as 32 distinct symbols per raw input token.",
"The decision to label each token with a single symbol is partially rooted in prior research providing evidence that syntactic decisions among human speakers adhere to the uniform information density hypothesis, thus each token may convey similar amounts of syntactic information (Levy and Jaeger, 2006).",
"Concretely, a learned projection matrix is first applied to the token-level representation vectors of GPT-2.",
"Each projected vector is then converted into a single discrete symbol via vector quantization (van den Oord et al., 2017).",
"The number of symbols is kept small; as such, only a few bits are needed to encode all symbols.",
"In comparison, the base architecture uses a 512-dimensional vector of 32-bit floating point numbers for each token.",
"We can obtain high parsing accuracy sending 5 bits per token, which is only 0.03% of the bits of the base architecture's token representations.",
"At around 8 bits per token, parsing performance approximately matches that of the base architecture.",
"After discretization, each symbol from the sequence is associated with a learned embedding, as specified in the vector quantization codebook.",
"These vectors are fed as an input to the bidirectional read-out network, which consists of Transformer layers and an MLP-based span classification layer that otherwise match the base architecture.",
"The output of the network is a chart of scores representing each possible constituent span of the sentence.",
"A tree is then efficiently generated through the CKY algorithm following the span scoring methods of Stern et al. (2017).",
"It should be noted that while the encoder is unidirectional, our read-out network is bidirectional.",
"The bidirectionality allows the network enough slack to learn a flexible mapping between the induced representation and standard constituency 3088 trees.",
"For example, the discrete symbol associated with a word may help determine a syntactic attachment that concerns previous words that have already been assigned their own symbols.",
"In practice, the behavior of the read-out network exhibits consistent patterns that we interpret in Section",
"5. Moreover, the main product of our method and the principle object of analysis in this paper is not the network itself but rather the sequence of discrete symbols, each of which encodes no knowledge of future context.",
"We train our models using a learning rate of 3e-5 for weights carried over from pre-training, a learning rate of 1e-4 for randomly initialized weights, and a batch size of 32.",
"In order to facilitate training, the first two epochs of training proceed without the use of vector quantization.",
"During this time, a streaming k-means algorithm (Ackermann et al., 2012) calculates the initial centroids to use for vector quantization.",
"Over the course of the third epoch, the model linearly interpolates between continuous and quantized representations, and uses only the quantized version from the fourth epoch until the end of training.",
"We found that cold-starting with randomly-initialized centroids performs worse, in the sense that some centroids would never be used or updated at any point during training.",
"We attribute this degradation to the fact that randomly sampled code vectors are a poor distributional fit for outputs arising from a pre-trained GPT-2 model.",
"We apply our approach to the labeled constituent trees of the English Penn Treebank (Marcus et al., 1993).",
"The final incremental model generated using this setup achieves a score of 94.97 F1 on the Penn Treebank WSJ test set.",
"This model uses only 8 bits per token (256 symbols) to define the discrete symbol set using a unidirectional pretrained model (GPT2-medium).",
"A comparable model (Kitaev et al., 2019) that combines the same pre-trained encoder with deep bidirectional processing achieves 95.10 F1.",
"This shows that our representation can induce parse trees with competitive accuracy.",
"In Table 1, we present an F1 score comparison that highlights the behavior of different syntactic representations with different choices of encoder.",
"When directly predicting either per-span label probabilities (following the span classification approach of Stern et al., 2017), or actions in the attach-juxtapose transition system (Yang and Deng, Encoder Type Bi ( ) Uni ( ) Representation BERT GPT-2 GPT-2 Span Classification 95.59 95.10 93.95 (Kitaev et al., 2019) Attach-Juxtapose 95.79 94.53 87.66 (Yang and Deng, 2020) Learned 95.55 94.97 (This work) Table 1: F1 on the WSJ test set for parsers using different syntactic representations and pre-trained encoders.",
"2020), failing to include bidirectional layers on top of a unidirectional GPT-2 incurs a strong accuracy penalty.",
"This is despite the fact that both systems can discard speculative attachment decisions.",
"In the case of the chart parser with representations that consist of label probabilities for each span, adding an additional word can cause a switch to a new analysis by way of the CKY decoding procedure.",
"In the case of the attach-juxtapose parser, the same can be achieved via the use of beam search.",
"Nevertheless, incrementally predicting either of these representations fails to leverage the full power of the pre-trained encoder.",
"The choice of GPT-2 rather than a stronger bidirectional model has a large effect on the performance on the Penn Treebank.",
"To give a more accurate comparison with other models, Table 1 also shows F1 scores for models based on BERT, with the recognition that no model with such a deeply bidirectional encoder can truly be referred to as incremental.",
"Our approach of inducing learned representations with vector quantization also performs well in this setting, validating the method.",
"Even higher scores are achievable by using stronger pre-trained models, different forms of bidirectional processing, and additional supervision in the form of dependency trees; Mrini et al. (2020) combine all of these elements to achieve 96.38 F1.",
"However, many of these techniques are either orthogonal to our work, or they cannot be borrowed into an incremental setting due to their focus on deeply bidirectional neural processing.",
"We further evaluate our approach in terms of the compactness of the produced representations.",
"To do this, we trained a number of models while varying the size of the symbol set.",
"For added comparison, we also trained models using a bidirectional pretrained encoder (BERT).",
"As a baseline, we also produced a model that assigns symbols through simple k-means clustering of single-word embeddings (Mikolov et al., 2013) rather than fine-tuned contextual models.",
"The average F1 score for each model across a range of tag set sizes is shown in Table",
"2. Note that while the numbers of possible tags are all powers of two, this is not a strict requirement of the model, and any positive integer may be used as the tag set size.",
"While our best-performing unidirectional model uses 8 bits per token, using as few as 5 bits per token (32 symbols) retains a performance of 93.72 F1 on the test set.",
"As a point of comparison, gold CCG supertags in the CCGbank (Hockenmaier and Steedman, 2007) training data have an entropy of 5.14 bits per word.",
"However, CCG decoding typically requires multiple analyses to be considered in parallel.",
"A better comparison, then, might be the entropy of top-k supertag predictions from a supertagging model.",
"We find that the trained model of Tian et al. (2020) has an entropy of 5.98 bits/word for its ranked top-2 predictions, 7.57 for top-3, and 9.03 for top-4.",
"Our method's best-performing setting of 8 bits per word is therefore at an entropy level similar to top-3 or top-4 predictions for a recent CCG supertagger.",
"Having achieved high F1 scores, we must next demonstrate that our representation is, in fact, incremental.",
"An incremental representation has meaningful syntactic information in each of its prefixes, and we can probe this by running our read-out network after each word in the sentence, as shown in Figure",
"2. The resulting trees involve mostly local changes from word to word, which shows that important information is not being deferred to the very end of a sentence.",
"It should be noted that our read-out network was never trained on anything but complete sentences.",
"Applying it to fragments will produce individual trees that may not be representative of the ambiguity present in the underlying representation.",
"For example, after the word on the read-out network outputs a prepositional phrase that initially appears to attach to the verb.",
"Depending on the label chosen for the next word, however, the final attachment can be to either the verb or the noun phrase.",
"Nevertheless, this approach allows us to probe the degree to which the representation encodes syntactic decisions immediately, versus deferring them to some later point.",
"For each span in the final tree, we can walk backwards through the partial readouts to find the furthest point when the span still appears in a readout; we call this the point in time that a span is finalized .",
"In Figure 2, the noun phrase The Council is finalized after the word Council, and the verb phrase is finalized after the word ap-proved.",
"For the purposes of identifying whether a span is the same across two different trees, we assume that a span is uniquely identified by its label, its starting position, and the position of the last word in the leftmost child of the span (or the position of the single word in the span, if it only covers one word).",
"The last of these we also refer to as the split point of the span.",
"Figure 3 shows that approximately half of all spans are finalized either immediately at or immediately after their split point.",
"The distribution has a tail that falls off roughly exponentially, as shown by the loosely straight-line decay on the log-linear plot.",
"The presence of this tail stands in contrast with the attach-juxtapose representation, where all attachments are determined immediately after a split point, and the only way to defer a decision past that point is to retain multiple analyses on something like a beam.",
"An extremely frequent phe-0 10 20 30 40 50 Number of words past split point 0.001 0.01 0.1 1 F i n a li z e d s p a n s ( f r a c t i o n o f t o t a l ) Figure 3: Our representation commits to (finalizes) the majority of spans within just a few words of their split point.",
"nomenon within the tail is when a phrase expands to be nested inside another phrase of the same type: sometimes this happens due to the nature of the constituency representation we're converting to, and sometimes it reflects actual syntactic ambiguity.",
"One example in the latter is shown in Figure 4, where either the NP or the S node must expand due to coordination.",
"Note how our representation can handle this situation without considering multiple candidate labelings, while speculative transition-based systems would not.",
"In our parsing scheme, each new token is assigned a single syntactic symbol based on all tokens up to the current.",
"The subsequent sequence of symbols then fully determines a constituency tree.",
"For different random initializations of our approach with the same set size, similar features are typically captured by the system.",
"Models using smaller sets of symbols tend to have the most variability in terms of feature distribution.",
"The entropy of several random initializations of these sets is shown in Figure",
"5. Entropy appears to roughly stabilize after a small number of training iterations.",
"At this point, the characteristics of each symbol also roughly stabilize.",
"The entropy of the distribution of symbols seems to increase linearly with the number of bits per representation, but does not reach a level that corresponds to uniform usage frequency for all symbols in the discrete inventory.",
"Due to the small size of our information bottleneck, we hypothesize that our symbols encode the most powerful features needed to produce an accurate constituent tree representable by the given bitrate.",
"Thus, by analyzing the features captured by differently sized symbol sets, we can deduce a rough hierarchy of distinct features that are relevant to the incremental parsing task.",
"Starting with a system using only 2 discrete symbols, we steadily increase the bit rate of the dictionary and manually inspect the representation to find interpretable token-level features.",
"Many of these are similarly found in other works investigating the linguistic features captured by the token representations of neural parsers (Gaddy et al., 2018; Li and Eisner, 2019).",
"What follows is the rough order in which several of these features appear:",
"1. Separation between noun phrases and verb phrases",
"2. Symbols representing a new determiner or noun phrase, and ending ongoing noun phrases",
"3. Special symbols for other simple parts of speech (adjectives, adverbs, questions, punctuation, etc.)",
"4. Indication of a token being in subordinate clauses or relative clauses",
"5. Multiple symbols per part of speech (often nouns, verbs, and prepositions) signifying different attachments",
"6. Indication of other specific and specialized structures, such as clauses which are the object of a verb phrase, or a noun within a relative clause",
"7. Other specific common language features, such as possessive markings, gerunds, tokens introducing new clauses, or adverbs that modify adjectives rather than verbs 5.4 Clause Separation To demonstrate the features learned and captured by these tags, consider a model using only 32 symbols.",
"Main, subordinate, and relative clauses are typically associated with different discrete symbols for the same parts of speech.",
"same words, but within different clause types.",
"In main clauses, subjects and verbs are assigned symbols 16 and",
"6. Subordinate clauses, however, tend to use alternate symbols 15 and 13 for subject nouns and verbs respectively, while relative clauses use 20 and 26.",
"This feature of our symbol set suggests that our tags capture structural context beyond the current word, and the features learned by these tags can have human-interpretable meanings upon analysis.",
"The structure of the final parse tree is interpolated from the series of discrete symbols resulting from the encoder network.",
"To analyze how syntactic decisions are encoded in our representation, we first attempted to train a modified PCFG based on Klein and Manning (2003), with the goal of replicating the behavior of our read-out network.",
"However, this system could only reach a performance around 76.18 F1 towards the reconstruction task, suggesting that the PCFG's assumptions of locality and sub-tree independence are not valid for our learned representations.",
"To better understand the mechanism by which our representations are capable of representing a wide range of syntactic structures, we focus specifically on cases with potential syntactic ambiguities.",
"Consider the minimal pair shown in Figure 7, where the predicted syntactic structure differs by only a single prepositional attachment.",
"This pair uses the same encoder model as the previous example, which has a maximum of 32 discrete symbols.",
"Due to the different symbols assigned to the prepositions, the read-out network attaches the prepositional phrase at a different height.",
"Not all prepositional attachments can be reliably determined based on only the words up to and including the preposition.",
"To avoid speculative behavior, the tag sequences must contain mechanisms for recording instances of ambiguity and then resolving them based on tokens further down in the string.",
"Figure 8 shows an example of how our representation handles such situations.",
"Running the read-out network for the prefix Lucas brought the groceries for produces a partial parse that attaches the preposition to the groceries.",
"However, the final token offers additional information that may influence the attachment location, suggesting that the symbol sequence up to the preposition does not eliminate either possible structure, but rather 3093 S VP NP PP for24 NP groceries7 the11 brought6 NP Lucas16 S VP NP PP NP him11 for24 NP groceries7 the11 brought6 NP Lucas16 S VPPP NP himself16 for24 NP groceries7 the11 brought 6 NP Lucas 16 Figure 8: Two possible sentences continuing from the prefix Lucas brought the groceries for where the final attachment height for the prepositional phrase is determined by the discrete symbol for the word following the preposition.",
"encodes the locations of other likely attachments.",
"The encoder's decision over whether to mark the final token as symbol 11 or 16 allows the final tree to have an attachment to the verb phrase, rather than adhering to the partial interpretation of targeting the noun phrase.",
"In this paper, we present an approach to inducing syntactic representations that associate each token in the input with a discrete symbol from an arbitrarily-sized vocabulary, where the representations can be predicted incrementally in a strictly append-only manner.",
"Our models achieve high F1 on the WSJ test set despite a steep information bottleneck limiting the information that can be associated with each token.",
"The token-level tags produced by our model encode relevant syntactic information suitable for the given bit rate, while the locations of these tags serve to concretely define the location at which syntactic decisions can be committed to in a speculation-free manner.",
"These systems can serve to improve our understanding of incremental parsing and sequential decision making, and the underlying computational methods may be useful in the analysis of other incremental contexts.",
"This research was supported by DARPA under the LwLL program / Grant No.",
"FA8750-19-1-0504."
] | [
"method",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"objective",
"result",
"method",
"method",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"method",
"result",
"method",
"result",
"other",
"other"
] |
[
"Conversational question answering aims to provide natural-language answers to users in information-seeking conversations.",
"Existing conversational QA benchmarks compare models with pre-collected human-human conversations, using ground-truth answers provided in conversational history.",
"It remains unclear whether we can rely on this static evaluation for model development and whether current systems can well generalize to real-world human-machine conversations.",
"In this work, we conduct the first large-scale human evaluation of state-of-the-art conversational QA systems, where human evaluators converse with models and judge the correctness of their answers.",
"We find that the distribution of human-machine conversations differs drastically from that of human-human conversations, and there is a disagreement between human and gold-history evaluation in terms of model ranking.",
"We further investigate how to improve automatic evaluations, and propose a question rewriting mechanism based on predicted history, which better correlates with human judgments.",
"Finally, we analyze the impact of various modeling strategies and discuss future directions towards building better conversational question answering systems.",
"1 1 Introduction Conversational question answering aims to build machines to answer questions in conversations and has the promise to revolutionize the way humans interact with machines for information seeking.",
"With recent development of large-scale datasets (Choi et al., 2018; Saeidi et al., 2018; Reddy et al., 2019; Campos et al., 2020), rapid progress has been made in better modeling of conversational QA systems.",
"Current conversational QA datasets are collected by crowdsourcing human-human conversations, * The first two authors contributed equally.",
"where the questioner asks questions about a specific topic, and the answerer provides answers based on an evidence passage and the conversational history.",
"When evaluating conversational QA systems, a set of held-out conversations are used for asking models questions in turn.",
"Since the evaluation builds on pre-collected conversations, the gold history of the conversation is always provided, regardless of models' actual predictions (Figure",
"1(b)).",
"Although current systems achieve near-human F1 scores on this static evaluation, it is questionable whether this can faithfully reflect models' true performance in real-world applications.",
"To what extent do human-machine conversations deviate from human-human conversations?",
"What will happen if models have no access to ground-truth answers in a conversation?",
"To answer these questions and better understand the performance of conversational QA systems, we carry out the first large-scale human evaluation with four state-of-the-art models on the QuAC dataset (Choi et al., 2018) by having human evaluators converse with the models and judge the correctness of their answers.",
"We collected 1,446 human-machine conversations in total, with 15,059 question-answer pairs.",
"Through careful analysis, we notice a significant distribution shift from human-human conversations and identify a clear inconsistency of model performance between current evaluation protocol and human judgements.",
"This finding motivates us to improve automatic evaluation such that it is better aligned with human evaluation.",
"Mandya et al. (2020); Siblini et al. (2021) identify a similar issue in gold-history evaluation and propose to use models' own predictions for automatic evaluation.",
"However, predicted-history evaluation poses another challenge: since all the questions have been collected beforehand, using predicted history will invalidate some of the questions because of changes in the conversational history (see Figure",
"1(c) for an example).",
"rewriting mechanism, which automatically detects and rewrites invalid questions with predicted history (Figure 4).",
"We use a coreference resolution model (Lee et al., 2018) to detect inconsistency of conference in question text conditioned on predicted history and gold history, and then rewrite those questions by substituting with correct mentions, so that the questions are resolvable in the predicted context.",
"Compared to predicted-history evaluation, we find that incorporating this rewriting mechanism aligns better with human evaluation.",
"Finally, we also investigate the impact of different modeling strategies based on human evaluation.",
"We find that both accurately detecting unanswerable questions and explicitly modeling question dependencies in conversations are crucial for model performance.",
"Equipped with all the insights, we discuss directions for conversational QA modeling.",
"We release our human evaluation dataset and hope that our findings can shed light on future development of better conversational QA systems.",
"Evaluation of conversational QA in real-world consists of three components: an evidence passage P , a (human) questioner H that has no access to P , 2 and a model M that has access to P .",
"The questioner asks questions about P and the model answers them based on P and the conversational history thus far (see an example in Figure",
"1(a)).",
"Formally, for the i -th turn, the human asks a ques-2 Existing conversational QA datasets make different assumptions: For example, QuAC (Choi et al., 2018) assumes no access but CoQA assumes the questioner to have access.",
"tion based on the previous conversation, Q i H ( Q 1 , A 1 , ..., Q i 1 , A i 1 ) , (1) and then the model answers it based on both the history and the passage, A i M ( P, Q 1 , A 1 , ..., Q i 1 , A i 1 , Q i ) , (2) where Q i and A i represent the question and the answer at the i -th turn.",
"If the question is unanswerable from P , we simply denote A i as CANNOT ANSWER .",
"The model M is then evaluated by the correctness of answers.",
"Evaluating conversational QA systems requires human in the loop and is hence expensive.",
"Instead, current benchmarks use automatic evaluation with gold history ( Auto-Gold ) and collect a set of human-human conversations for automatic evaluation.",
"For each passage, one annotator asks questions without seeing the passage, while the other annotator provides the answers.",
"Denote the collected questions and answers as Q i and A i .",
"In gold-history evaluation, the model is inquired with pre-collected questions Q i and the gold answers as history: A i M ( P, Q 1 , A 1 , ..., Q i 1 , A i 1 , Q i ) , (3) and we evaluate the model by comparing A i to A i (measured by word-level F1).",
"This process does not require human effort but cannot truly reflect the distribution of human-machine conversations, because unlike human questioners who may ask different questions based on different model predictions, this static process ignores model predictions and always asks the pre-collected question.",
"closer to real-world information-seeking conversations, where the questioner cannot see the evidence passage during the dataset collection.",
"It prevents the questioner asking questions that simply overlaps with the passage and encourages unanswerable questions.",
"QuAC also adopts extractive question answering that restricts the answer as a span of text, which is generally considered easier to evaluate.",
"For human evaluation and analysis, we choose the following four conversational QA models with different model architectures and training strategies:",
"BERT.",
"It is a simple BERT (Devlin et al., 2019) baseline which concatenates the previous two turns of question-answer pairs, the question, and the passage as the input and predicts the answer span.",
"3 This model is the same as the BERT + PHQA baseline in Qu et al. (2019a).",
"GraphFlow.",
"Chen et al. (2020) propose a recurrent graph neural network on top of BERT embeddings to model the dependencies between the question, the history and the passage.",
"ExCorD.",
"Kim et al. (2021) train a question rewriting model on CANARD (Elgohary et al., 2019) to generate context-independent questions, and then use both the original and the generated questions to train the QA model.",
"This model achieves the current state-of-the-art on QuAC (67.7% F1).",
"For all the models except BERT, we use the original implementations for a direct comparison.",
"We report their performance on both standard benchmark and our evaluation in Table 2. 3 Human Evaluation 3.1 Conversation collection In this section, we carry out a large-scale human evaluation with the four models discussed above.",
"We collect human-machine conversations using 100 passages from the QuAC development set on Amazon Mechanical Turk.",
"4 We also design a set 3 We use bert-base-uncased as the encoder.",
"4 We restrict the annotators from English-speaking countries, and those who have finished at least 1,000 HITS with an acceptance rate of > 95%.",
"The compensation rate for Amazon Mechanical Turk workers is calculated using $15/h.",
"of qualification questions to make sure that the annotators fully understand our annotation guideline.",
"For each model and each passage, we collect three conversations from three different annotators.",
"We collect each conversation in two steps: (1) The annotator has no access to the passage and asks questions.",
"The model extracts the answer span from the passage or returns CANNOT ANSWER in a human-machine conversation interface.",
"5 We provide the title, the section title, the background of the passage, and the first question from QuAC as a prompt to annotators.",
"Annotators are required to ask at least 8 and at most 12 questions.",
"We encourage context-dependent questions, but also allow open questions like What else is interesting? if asking a follow-up question is difficult.",
"(2) After the conversation ends, the annotator is shown the passage and asked to check whether the model predictions are correct or not.",
"We noticed that the annotators are biased when evaluating the correctness of answers.",
"For questions to which the model answered CANNOT ANSWER , annotators tend to mark the answer as incorrect without checking if the question is answerable.",
"Additionally, for answers with the correct types (e.g. a date as an answer to When was it?), annotators tend to mark it as correct without verifying it from the passage.",
"Therefore, we asked another group of annotators to verify question answerability and answer correctness.",
"For each collected conversation, we ask two additional annotators to validate the annotations.",
"First, each annotator reads the passage before seeing the conversation.",
"Then, the annotator sees the question (and question only) and selects whether the question is",
"(a) ungrammatical,",
"(b) unanswerable, or",
"(c) answerable.",
"If the annotator chooses answerable, the interface then reveals the answer and asks about its correctness.",
"If the answer is incorrect, the annotator selects the correct answer span from the passage.",
"We discard all questions that both annotators find ungrammatical and the correctness is taken as the majority of the 3 annotations.",
"In total, we collected 1,446 human-machine conversations and 15,059 question-answer pairs.",
"We release this collection as an important source that 5 We used ParlAI (Miller et al., 2017) to build the interface.",
"complements existing conversational QA datasets.",
"Numbers of conversations and question-answer pairs collected for each model are shown in Table 1. The data distribution of this collection is very different from the original QuAC dataset (human-human conversations): we see more open questions and unanswerable questions, due to less fluent conversation flow caused by model mistakes, and that models cannot provide feedback to questioner about whether an answer is worth following up like human answerers do (more analysis in 6.2).",
"Deciding the correctness of answers is challenging even for humans in some cases, especially when questions are short and ambiguous.",
"We measure annotators' agreement and calculate the Fleiss' Kappa (Fleiss, 1971) on the agreement between annotators in the validation phase.",
"We achieve = 0 .",
"598 (moderate agreement) of overall annotation agreement.",
"Focusing on answerability annotation, we have = 0 .",
"679 (substantial agreement).",
"We now compare the results from our human evaluation and gold-history (automatic) evaluation.",
"Note that the two sets of numbers are not directly comparable: (1) the human evaluation reports accuracy, while the automatic evaluation reports F1 scores; (2) the absolute numbers of human evaluation are much higher than those of automatic evaluations.",
"For example, for the BERT model, the human evaluation accuracy is 82.6% while the automatic evaluation F1 is only 63.2%.",
"The reason is that, in automatic evaluations, the gold answers cannot capture all possible correct answers to open-ended questions or questions with multiple answers; however, the human annotators can evaluate the correctness of answers easily.",
"Nevertheless, we can compare relative rankings between different models.",
"Current standard evaluation cannot reflect model performance in human-machine conversations: (1) Human evaluation and Auto-Gold rank BERT and GraphFlow differently; especially, GraphFlow performs much better in automatic evaluation, but worse in human evaluation.",
"(2) The gap between HAM and ExCorD is significant (F1 of 65.4% vs 67.7%) in the automatic evaluation but the two models perform similarly in human evaluation (ac-curacy of 87.8% vs 87.9%).",
"The inconsistency between human evaluation and gold-history evaluation suggests that we need better ways to evaluate and develop our conversational QA models.",
"When being deployed in realistic scenarios, the models would never have access to the ground truth (gold answers) in previous turns and are only exposed to the conversational history and the passage.",
"Intuitively, we can simply replace gold answers by the predicted answers of models and we name this as predicted-history evaluation ( Auto-Pred ).",
"Formally, the model makes predictions based on the questions and its own answers: A i M ( P, Q 1 , A 1 , ..., Q i 1 , A i 1 , Q i ) .",
"This evaluation has been suggested by several recent works (Mandya et al., 2020; Siblini et al., 2021), which reported a significant performance drop using predicted history.",
"We observe the same performance degradation, shown in Table 2. However, another issue naturally arises with predicted history: Q i s were written by the dataset annotators based on ( Q 1 , A 1 , ..., Q i 1 , A i 1 ), which 8077 Unresolved coreference (44.0%) Q 1 : What was Frenzal Rhomb's first song?",
"We examined 100 QuAC conversations with the best-performing model (ExCorD) and identified three categories of invalid questions caused by predicted history.",
"We find that 23% of the questions become invalid after using the predicted history.",
"We summarize the types of invalid questions as follows (see detailed examples in Figure 3): Unresolved coreference (44.0%).",
"The question becomes invalid for containing either a pronoun or a definite noun phrase that refers to an entity unresolvable without the gold history.",
"Incoherence (39.1%).",
"The question is incoherent with the conversation flow (e.g., mentioning an entity non-existent in predicted history).",
"While humans may still answer the question using the passage, this leads to an unnatural conversation and a train-test discrepancy for models.",
"Correct answer changed (16.9%).",
"The answer to this question with the predicted history changes from when it is based on the gold history.",
"We further analyze the reasons for the biggest unresolved coreference category and find that the model either gives an incorrect answer to the previous question (incorrect prediction, 39.8%), or the model predicts a different (yet correct) answer to What was the band's fi rst success album at the international level?",
"an open question (open question, 37.0%), or the model returns CANNOT ANSWER incorrectly (no prediction, 9.5%), or the gold answer is longer than prediction and the next question depends on the extra part (extra gold information, 13.6%).",
"Invalid questions result in compounding errors, which may further affect how the model interprets the following questions.",
"Among all the invalid question categories, unresolved coreference questions are the most critical ones.",
"They lead to incorrect interpretations of questions and hence wrong answers.",
"We propose to improve our evaluation by first detecting these questions using a state-of-the-art coreference resolution system (Lee et al., 2018) 6 , and then substituting them with either rewriting the questions in-place and replacing the questions with their context-independent counterparts.",
"Detecting invalid questions.",
"We make the as-sumption that if the coreference model resolves mentions in Q i differently between using gold history ( Q 1 , A 1 , ..., A i 1 , Q i ) and predicted history ( Q 1 , A 1 , ..., A i 1 , Q i ) , then Q i is identified as having an unresolved coreference issue.",
"6 We use the coreference model from AllenNLP (Gardner et al., 2018).",
"where BG is the background, S i and S i denote the inputs for gold and predicted history.",
"After the coreference model returns entity cluster information given S i and S i , we extract a list of entities E = { e 1 , ..., e | E | } and E = { e 1 , ..., e | E | } .",
"7 We say Q i is valid only if E = E , that is, | E | = | E | and e j = e j , e j E, assuming e j and e j have a shared mention in Q i .",
"We determine whether e j = e j by checking if F1 ( s j , s j ) > 0 , where s j and s j are the first mention of e j and e j respectively, and F1 is the word-level F1 score, i.e., e j = e j as long as their first mentions have word overlap.",
"The reason we take the F1 instead of exact match to check whether the entities are the same is stated in Appendix A. Question rewriting through entity substitution.",
"Our first strategy is to substitute the entity names in Q i with entities in E , if Q i is invalid.",
"The rewritten question, instead of the original one, will be used in the conversation history and fed into the model.",
"We denote this evaluation method as rewritten-question evaluation ( Auto-Rewrite ), and Figure 4 illustrates a concrete example.",
"To analyze how well Auto-Rewrite does in detecting and rewriting questions, we manually check 100 conversations of ExCorD from the QuAC development set.",
"We find that Auto-Rewrite can detect invalid questions with a precision of 72% and a recall of 72% (more detailed analysis in Appendix B).",
"An example of correctly detected and rewritten question is presented in Figure 4.",
"Question replacement using CANARD.",
"Another strategy is to replace the invalid questions with context-independent questions.",
"The CANARD 7 We are only interested in the entities mentioned in the current question Q i and we filter out named entities (e.g., the National Football League ) because they can be understood without coreference resolution.",
"dataset (Elgohary et al., 2019) provides such a resource, which contains human-rewritten context-independent version of QuAC's questions.",
"Recent works (Anantha et al., 2021; Elgohary et al., 2019) have proposed training sequence-to-sequence models on such dataset to rewrite questions; however, since the performance of the question-rewriting models is upper bounded by the human-rewritten version, we simply use CANARD for question replacement.",
"We denote this strategy as replaced-question evaluation ( Auto-Replace ).",
"Because collecting context-independent questions is expensive, Auto-Replace is limited to evaluating models on QuAC; it is also possible to be extended to other datasets by training a question rewriting model, as demonstrated in existing work.",
"In this section, we compare human evaluation with all the automatic evaluations we have introduced: gold-history (Auto-Gold), predicted-history (Auto-Pred), and our proposed Auto-Rewrite and Auto-Replace evaluations.",
"We first explain the metrics we use in the comparison (6.1) and then discuss the findings (6.2 and 6.3).",
"Model performance and rankings.",
"We first consider using model performance reported by different evaluation methods.",
"Considering numbers of automatic and human evaluations are not directly comparable, we also calculate models' rankings and compare whether the rankings are consistent between automatic and human evaluations.",
"Model performance is reported in Table 2. In human evaluation, GraphFlow < BERT < HAM ExCorD; in Auto-Gold, BERT < GraphFlow < HAM < ExCorD; in other automatic evaluations, GraphFlow < BERT < HAM < ExCorD.",
"Statistics of unanswerable questions.",
"Percentage of unanswerable questions is an important aspect in conversations.",
"Automatic evaluations using static datasets have a fixed number of unanswerable questions, while in human evaluation, the percentage of unanswerable questions asked by human annotators varies with different models.",
"The statistics of unanswerable questions is shown in Table 3. Pairwise agreement.",
"For a more fine-grained evaluation, we perform a passage-level comparison for every pair of models.",
"More specifically, for every single passage we use one automatic metric to decide whether model A outperforms model B (or vice versa) and examine the percentage of passages that the automatic metric agrees with human evaluation.",
"For example, if the pairwise agreement of BERT/ExCorD between human evaluation and Auto-Gold is 52% , it means that Auto-Gold and human evaluation agree on 52% passages in terms of which model is better.",
"Higher agreement means the automatic evaluation is closer to human evaluation.",
"Figure 5 shows the results of pairwise agreement.",
"We found that automatic evaluations have a significant distribution shift from human evaluation.",
"We draw this conclusion from the following points.",
"Human evaluation shows a much higher model performance than all automatic evaluations, as shown in Table 2. Two reasons may cause this large discrepancy:",
"(a) Many conversational QA questions have multiple possible answers, and it is hard for the static dataset in automatic evaluations to capture all the answers.",
"It is not an issue in human evaluation because all answers are judged by human evaluators.",
"(b) There are more unanswerable questions and open questions in human evaluation (reason discussed in the next paragraph), which are easierfor example, models are almost always correct when answering questions like What else is interesting?.",
"Human evaluation has a much higher unanswerable question rate, as shown in Table 3. The reason is that in human-human data collection, the answers are usually correct and the questioners can ask followup questions upon the high-quality conversation; in human-machine interactions, since the models can make mistakes, the conversation flow is less fluent and it is harder to have followup questions.",
"Thus, questioners chatting with models tend to ask more open or unanswerable questions.",
"All automatic evaluation methods have a pairwise agreement lower than 70% with human evaluation, as shown in Figure 2. This suggests that all automatic evaluations cannot faithfully reflect the model performance of human evaluation.",
"First, we can clearly see that among all automatic evaluations, Auto-Gold deviates the most from the human evaluation.",
"From Table 2, only Auto-Gold shows different rankings from human evaluation, while Auto-Pred, Auto-Rewrite, and Auto-Replace show consistent rankings to human judgments.",
"In Figure 2, we see that Auto-Gold has the lowest agreement with human evaluation; among others, Auto-Rewrite better agrees with human evaluation for most model pairs.",
"Surprisingly, Auto-Rewrite is even better than Auto-Replacewhich uses human-written context independent questionsin most cases.",
"After checking the Auto-Replace conversations, we found that human-written context independent questions are usually much longer than QuAC questions and introduce extra information 8080 Predicted unanswerable Q. Precision Recall B G H E B G H E B G H E Auto-Gold 27.1 21.5 27.1 28.3 56.8 62.3 57.1 57.9 68.1 59.3 68.4 72.5 Auto-Pred 27.8 13.8 28.6 28.9 50.0 53.9 52.3 53.3 61.4 33.0 66.1 68.2 Auto-Rewrite 27.3 13.1 25.1 26.0 48.6 55.0 52.4 53.9 65.7 35.7 65.1 69.4 Auto-Replace 27.5 12.9 25.2 25.7 48.6 54.2 52.1 53.8 66.1 34.7 64.9 68.4 Human 42.3 14.7 37.2 36.0 75.0 93.0 86.8 87.4 95.2 72.5 93.7 93.3 Table 4: The percentage of models' predicted unanswerable questions, and the precision and recall for detecting unanswerable questions in different evaluations.",
"into the context, which leads to out-of-domain challenges for conversational QA models (example in Appendix C).",
"It shows that our rewriting strategy can better reflect real-world performance of conversational QA systems.",
"However, Auto-Rewrite is not perfectwe see that when comparing G/E or G/H, Auto-Pred is better than Auto-Rewrite; in all model pairs, the agreement between human evaluation and Auto-Rewrite is still lower than 70%.",
"This calls for further effort in designing better automatic evaluation in the future.",
"With insights drawn from human evaluation and comparison with automatic evaluations, we discuss the impact of different modeling strategies, as well as future directions towards building better conversational question answering systems.",
"Modeling question dependencies on conversational context.",
"When we focus on answerable questions (Table 2), we notice that GraphFlow, HAM and ExCorD perform much better than BERT.",
"We compare the modeling differences of the four systems in Figure 6, and identify that all the three better systems explicitly model the question dependencies on the conversation history and the passage: both GraphFlow and HAM highlight repeated mentions in questions and conversation history by special embeddings (turn marker and PosHAE) and use attention mechanism to select the most relevant part from the context; ExCorD adopts a question rewriting module that generates context-independent questions given the history and passage.",
"All those designs help models better understand the question in a conversational context.",
"Figure 7 gives an example where GraphFlow, HAM and ExCorD resolved the question from long conversation history while BERT failed.",
"demonstrates models' performance in detecting unanswerable questions .",
"We notice that GraphFlow predicts much fewer unanswerable questions than the other three models, and has a high precision and a low recall in unanswerable detection.",
"This is because GraphFlow uses a separate network for predicting unanswerable questions, which is harder to calibrate, while the other models jointly predict unanswerable questions and answer spans.",
"This behavior has two effects:",
"(a) GraphFlow's overall performance is dragged down by its poor unanswerable detection result (Table 2).",
"(b) In human evaluation, annotators ask fewer unanswerable questions with GraphFlow (Table 3)when the model outputs more, regardless of correctness, the human questioner has a higher chance to ask passage-related followup questions.",
"Both suggest that how well the model detects unanswerable questions significantly affects its performance and the flow in human-machine conversations.",
"Optimizing towards the new testing protocols.",
"Most existing works on conversational QA modeling focus on optimizing towards Auto-Gold evaluation.",
"Since Auto-Gold has a large gap from the real-world evaluation, more efforts are needed in optimizing towards the human evaluation, or Auto-Rewrite, which better reflects human evaluation.",
"One potential direction is to improve models' robustness given noisy conversation history, which simulates the inaccurate history in real world that consists of models' own predictions.",
"In fact, prior works (Mandya et al., 2020; Siblini et al., 2021) that used predicted history in training showed that it benefits the models in predicted-history evaluation.",
"et al., 2018), CoQA (Reddy et al., 2019), and DoQA (Campos et al., 2020), as well as a few recent works focusing on conversational open-domain question answering (Adlakha et al., 2021; Anantha et al., 2021; Qu et al., 2020) Different from single-turn QA datasets (Rajpurkar et al., 2016), conversational QA requires the model to understand the question in the context of conversational history.",
"There have been many methods proposed to improve conversational QA performance (Ohsugi et al., 2019; Chen et al., 2020; Qu et al., 2019b; Kim et al., 2021) and significant improvements have been made on conversational QA benchmarks.",
"Besides text-based conversational QA tasks, there also exist conversational QA benchmarks that require external knowledge or other modalities (Saeidi et al., 2018; Saha et al., 2018; Guo et al., 2018; Das et al., 2017).",
"Only recently has it been noticed that the current method of evaluating conversational QA models is flawed.",
"Mandya et al. (2020); Siblini et al. (2021) point out that using gold answers in history is not consistent with real-world scenarios and propose to use predicted history for evaluation.",
"Different from prior works, in this paper, we conduct a large scale human evaluation to provide evidence for why gold-history evaluation is sub-optimal.",
"In addition, we point out that even predicted-history evaluation has issues with invalid questions, for which we propose rewriting questions to further mitigate the gap.",
"Automatic evaluation of dialogue systems.",
"Automatically evaluating dialogue systems is difficult due to the nature of conversations.",
"In recent years, the NLP community has cautiously re-evaluated and identified flaws in many popular automated evaluation strategies of dialogue systems (Liu et al., 2016; Sai et al., 2019), and have proposed new evaluation protocols to align more with human evaluation in a real-world setting: Huang et al. (2020); Ye et al. (2021) evaluate the coherence of the dialogue systems; Gupta et al. (2019) explore to use multiple references for evaluation; Mehri and Eskenazi (2020) propose an unsupervised and reference-free evaluation; Lowe et al. (2017); Tao et al. (2018); Ghazarian et al. (2019); Shimanaka et al. (2019); Sai et al. (2020) train models to predict the relatedness score between references and model outputs, which are shown to be better than BLEU (Papineni et al., 2002) or ROGUE (Lin, 2004).",
"In this work, we carry out the first large-scale human evaluation on conversational QA systems.",
"We show that current standard automatic evaluation with gold history cannot reflect models' performance in human evaluation, and that human-machine conversations have a large distribution shift from static conversational QA datasets of human-human conversations.",
"To tackle these problems, we propose to use predicted history with rewriting invalid questions for evaluation, which reduces the gap between automatic evaluations and real-world human evaluation.",
"Based on the insights from the human evaluation results, we also nalyze current conversational QA systems and identify promising directions for future development.",
"We thank Alexander Wettig and other members of the Princeton NLP group, and the anonymous reviewers for their valuable feedback.",
"This research is supported by a Graduate Fellowship at Princeton University and the James Mi *91 Research Innovation Fund for Data Science."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"result",
"objective",
"result",
"other",
"other"
] |
[
"We introduce a new method to tag Multiword Expressions (MWEs) using a linguistically interpretable language-independent deep learning architecture.",
"We specifically target discontinuity, an under-explored aspect that poses a significant challenge to computational treatment of MWEs.",
"Two neural architectures are explored: Graph Convolutional Network (GCN) and multi-head self-attention.",
"GCN leverages dependency parse information, and self-attention attends to long-range relations.",
"We finally propose a combined model that integrates complementary information from both, through a gating mechanism.",
"The experiments on a standard multilingual dataset for verbal MWEs show that our model outperforms the baselines not only in the case of discontinuous MWEs but also in overall F-score.",
"1 1 Introduction Multiword expressions (MWEs) are linguistic units composed of more than one word whose meanings cannot be fully determined by the semantics of their components (Sag et al., 2002; Baldwin and Kim, 2010).",
"As they are fraught with syntactic and semantic idiosyncrasies, their automatic identification remains a major challenge (Constant et al., 2017).",
"Occurrences of discontinuous MWEs are particularly elusive as they involve relationships between non-adjacent tokens (e.g. put one of the blue masks on ).",
"While some previous studies disregard discontinuous MWEs (Legrand and Collobert, 2016), others stress the importance of factoring them in (Schneider et al., 2014).",
"Using a CRF-based and a transition-based approach respectively, Moreau et al. (2018) and Al Saied et al. (2017) try to *The first two authors contributed equally.",
"capture discontinuous occurrences with help from dependency parse information.",
"Previously explored neural MWE identification models (Ghar-bieh et al., 2017) suffer from limitations in dealing with discontinuity, which can be attributed to their inherently sequential nature.",
"More sophisticated architectures are yet to be investigated (Constant et al., 2017).",
"Graph convolutional neural networks (GCNs) (Kipf and Welling, 2017) and attention-based neural sequence labeling (Tan et al., 2018) are methodologies suited for modeling non-adjacent relations and are hence adapted to MWE identification in this study.",
"Conventional GCN (Kipf and Welling, 2017) uses a global graph structure for the entire input.",
"We modify it such that GCN filters convolve nodes of dependency parse tree on a per-sentence basis.",
"Self-attention, on the other hand, learns representations by relating different parts of the same sequence.",
"Each position in a sequence is linked to any other position with O (1) operations, minimising maximum path (compared to RNN's O ( n ) ) which facilitates gradient flow and makes it theoretically well-suited for learning long-range dependencies (Vaswani et al., 2017).",
"The difference in the two approaches motivates our attempt to incorporate them into a hybrid model with an eye to exploiting their individual strengths.",
"Other studies that used related syntax-aware methods in sequence labeling include Marcheggiani and Titov (2017) and Strubell et al. (2018) where GCN and self-attention were separately applied to semantic role labelling.",
"Our contribution in this study, is to show for the first time, how GCNs can be successfully applied to MWE identification, especially to tackle discontinuous ones.",
"Furthermore, we propose a novel architecture that integrates GCN with self-attention outperforming state-of-the-art.",
"The resulting models not only prove superior to existing methods in terms of overall performance but also are more robust in handling cases with gaps.",
"To specifically target discontinuity, we explore two mechanisms both preceding a Bi-LSTM:",
"1) a GCN layer to act as a syntactic ngram detector,",
"2) an attention mechanism to learn long-range dependencies.",
"Standard convolutional filters act as sequential ngram detectors (Kim, 2014).",
"Such filters might prove inadequate in modeling complex language units like discontinuous MWEs.",
"One way to overcome this problem is to consider non-sequential relations by attending to syntactic information in parse trees through the application of GCNs.",
"GCN is defined as a directed multi-node graph G ( V, E ) where v i V and ( v i , r, v j ) E are entities (words) and edges (relations) respectively.",
"By defining a vector x v as the feature representation for the word v , the convolution equation in GCN can be defined as a non-linear activation function f and a filter W with a bias term b as: c = f ( (cid:88) i r ( v ) W x i + b ) (1) where r ( v ) shows all words in relation with the given word v in a sentence, and c represents the output of the convolution.",
"Following Kipf and Welling (2017) and Schlichtkrull et al. (2017), we represent graph relations using adjacency matrices as mask filters for inputs.",
"We derive associated words from the dependency parse tree of the target sentence.",
"Since we are dealing with a sequence labelling task, there is an adjacency matrix representing relations among words (as nodes of the dependency graph) for each sentence.",
"We define the sentence-level convolution operation with filter W s and bias b s as follows: C s = f ( W s XTA + b s ) (2) where X , A , and C are representation of words, adjacency matrix, and the convolution output, all at the level of sentence.",
"The above formalism considers only one relation type, while depending on the application, multiple relations can be defined.",
"Kipf and Welling (2017) construct separate adjacency matrices corresponding to each relation type and direction.",
"Given the variety of dependency relations in a parse tree (e.g. obj, nsubj, ad-vcl, conj, etc), and per-sentence adjacency matrices, we would end up with an over-parametrised model in a sequence labeling task.",
"In this work, we simply treat all relations equally, but consider only three types of relations:",
"1) the head to the dependents,",
"2) the dependents to the head, and",
"3) each word to itself (self-loops).",
"The final output is obtained by aggregating the outputs from the three relations.",
"Attention (Bahdanau et al., 2014) helps a model address the most relevant parts of a sequence through weighting.",
"As attention is designed to capture dependencies in a sequence regardless of distance, it is complementary to RNN or CNN models where longer distances pose a challenge.",
"In this work we employ multi-head self-attention with a weighting function based on scaled dot product which makes it fast and computationally efficient.",
"Based on the formulation of Transformer by Vaswani et al. (2017), in the encoding module an input vector x is mapped to three equally sized matrices K , Q , and V (representing key, query and value) and the output weight matrix is then computed as follows: Att ( Q, K, V ) = softmax ( QKT d ) V (3) The timing signal required for the self-attention to work is already contained in the preceding CNN layers alleviating the need for position encoding.",
"The overall scheme of the proposed model, composed of two parallel branches, is depicted in Figure 1.",
"We employ multi-channel CNNs as the step preceding self-attention.",
"One channel is comprised of two stacked 1D CNNs and the other is a single 1D CNN.",
"After concatenation and batch normalisation, a multi-head self attention mechanism is applied (Section 2.2).",
"Parallel to the self-attention branch, GCN learns a separate representation (Section 2.1).",
"Since the GCN layer retains important structural information and is sensitive to positional data from the syntax tree, we consider it as a position-based approach.",
"On the other hand, the self-attention layer J times K concat Q V Multi-head selfattention linear Transform Carry FFN Linear Bi-LSTM GCN Multi-channel CNNs W o v XTA v s s s Figure 1: A hybrid sequence labeling approach integrating GCN (o: output dimension; v: word vectors dimension; s: sentence length) and Self-Attention.",
"is intended to capture long-range dependencies in a sentence.",
"It relates elements of the same input through a similarity measure irrespective of their distance.",
"We therefore regard it as a content-based approach.",
"As these layers represent different methodologies, we seek to introduce a model that combines their complementary traits in our particular task.",
"Gating Mechanism .",
"Due to the considerable overlap between the GCN and self-attention layers, a naive concatenation introduces redundancy which significantly lowers the learning power of the model.",
"To effectively integrate the information, we design a simple gating mechanism using feed-forward highway layers (Srivastava et al., 2015) which learn to regulate information flow in consecutive training epochs.",
"Each highway layer consists of a Carry ( Cr ) and a Transform ( T r ) gate which decide how much information should pass or be modified.",
"For simplicity Cr is defined as 1 T r .",
"We apply a block of J stacked highway layers (the section inside the blue dotted square in Figure 1).",
"Each layer regulates its input x using the two gates and a feedforward layer H as follows: y = T r (cid:12) H + (1 T r ) (cid:12) x (4) where (cid:12) denotes the Hadamard product and T r is defined as ( W Tr x + b Tr ) .",
"We set b Tr to a negative number to reinforce carry behavior which helps the model learn temporal dependencies early in the training.",
"Our architecture bears some resemblance to Marcheggiani and Titov (2017) and Zhang et al. (2018) in its complementary view of GCN and BiLSTM.",
"However there are some important differences.",
"In these works, BiLSTM is applied prior to GCN in order to encode contextualised information and to enhance the teleportation capability of GCN.",
"Marcheggiani and Titov (2017) stack a few BiLSTM layers with the idea that the resulting representation would enable GCN to consider nodes that are multiple hops away in the input graph.",
"Zhang et al. (2018) use a similar encoder, however the model employs single BiLSTM and GCN layers, and the graph of relations is undirected.",
"In our work, we use pre-trained contextualised embeddings that already contain all the informative content about word order and disambiguation.",
"We put BiLSTM on top of GCN, in line with how CNNs are traditionally applied as feature generating front-ends to RNNs.",
"Furthermore, Marcheggiani and Titov (2017) use an edge-wise gating mechanism in order to down-weight uninformative syntactic dependencies.",
"This method can mitigate noise when parsing information is deemed noisy, however in Zhang et al. (2018) it caused performance to drop.",
"Given our low-resource setting, in this work we preferred not to potentially down-weight contribution of individual edges, therefore treating them equally.",
"We rely on gating as the last step when we combine GCN and self-attention.",
"Data .",
"We experiment with datasets from the shared task on automatic identification of verbal Multiword Expressions (Ramisch et al., 2018).",
"The datasets are tagged for different kinds of verbal MWEs including idioms, verb particle constructions, and light verb constructions among others.",
"We focus on annotated corpora of four languages: French (FR), German (DE), English (EN), and Persian (FA) due to their variety in size and proportion of discontinuous MWEs.",
"Tags in the datasets are converted to a variation of IOB which includes the tags B (beginning of MWEs), I (other components of MWEs), and O (tokens outside MWEs), with the addition of G for arbitrary tokens in between the MWE components e.g. make [B] important [ G ] decisions [ I ] .",
"ELMo .",
"In our experiments, we make use of ELMo embeddings (Peters et al., 2018) which are contextualised and token-based as opposed to All Discontinuous TokenMWE-based based MWE-based L model F F % P R F EN baseline 41.37 35.38 32 24.44 10.48 14.67 GCN-based 39.78 39.11 39.53 16.19 22.97 Att-based 33.33 31.79 46.88 14.29 21.90 H-combined 41.63 40.76 63.33 18.10 28.15 DE baseline 62.27 57.17 43 69.50 45.37 54.90 GCN-based 65.48 61.17 65.19 47.69 55.08 Att-based 61.20 58.19 67.86 43.98 53.37 H-combined 63.80 60.71 68.59 49.54 57.53 FR baseline 76.62 72.16 43 75.27 52.04 61.54 GCN-based 79.59 75.15 79.58 56.51 66.09 Att-based 78.21 74.23 71.49 60.59 65.59 H-combined 80.25 76.56 77.94 59.11 67.23 FA baseline 88.45 86.50 14 67.76 55.88 61.29 GCN-based 87.78 86.42 78.72 54.41 64.35 Att-based 87.55 84.20 62.32 63.24 62.77 H-combined 88.76 87.15 75.44 63.24 68.80 Table 1: Model performance (P, R and F) for development sets for all MWE and only discontinuous ones (%: proportion of discontinuous MWES) type-based word representations like word2vec or GLoVe where each word type is assigned a single vector.",
"Token-based embeddings better reflect the syntax and semantics of each word in its context compared to traditional type-based ones.",
"We use the implementation by Che et al. (2018) to train ELMo embeddings on our data.",
"Validation .",
"In the validation phase, we start with a strong baseline which is a CNN + Bi-LSTM model based on the top performing system in the VMWE shared task (Taslimipoor and Rohanian, 2018).",
"Our implemented baseline differs in that we employ ELMo rather than word2vec resulting in a significant improvement.",
"We perform hyper-parameter optimisation and make comparisons among our systems, including GCN + Bi-LSTM (GCN-based), CNN + attention + Bi-LSTM (Att-based), and their combination using a highway layer (H-combined) in Table 1.",
"Systems are evaluated using two types of precision, recall and F-score measures: strict MWE-based scores (every component of an MWE should be correctly tagged to be considered as true posi-tive), and token-based scores (a partial match between a predicted and a gold MWE would be considered as true positive).",
"We report results for all MWEs as well as discontinuous ones specifically.",
"According to Table 1, GCN-based outperforms Att-based and they both outperform the strong baseline in terms of MWE-based F-score in three out of four languages.",
"Combining GCN with attention using highway networks results in further improvements for EN, FR and FA.",
"The H-combined model consistently exceeds the baseline for all languages.",
"As can be seen in Table 1, GCN and H-combined models each show significant improvement with regard to discontinuous MWEs, regardless of the proportion of such expressions.",
"In Table 2 we show the superior performance (in terms of MWE-based F-score) of our top systems on the test data compared to the baseline and state-of-the-art systems, namely, ATILF-LLF (Al Saied et al., 2017) and SHOMA (Taslimipoor and Rohanian, 2018).",
"GCN works the best for discontinuous MWEs in EN and FA, while H-combined outperforms based on results for all MWEs except for FA.",
"The findings are further discussed in Section 5.",
"The overall results confirm our assumption that a hybrid architecture can mitigate errors of individual models and bolster their strengths.",
"To demonstrate the effectiveness of the models in detecting discontinuous MWEs, in Figure 2 we plot their performance for FR and EN given a range of different gap sizes.",
"As an ablation study, we show the results for the baseline, GCN-based, Att-based only, as well as H-combined models.",
"GCN and Att-based models each individually outperform the baseline, and the combined model clearly improves the results further.",
"The example in Figure 3 taken from the English dataset demonstrates the way GCN considers relations between non-adjacent tokens in the sentence.",
"Our baseline is prone to disregarding these links.",
"Similar cases captured by both GCN and H-combined (but not the baseline) are take a final look , picked one up , and cut yourself off .",
"In more complicated constructs where syntactic dependencies might not directly link all constituents, GCN alone is not always conducive to optimal performance.",
"In Figure 4, the French sentence is in the passive form and MWE parts are separated by 5 tokens.",
"This is an MWE skipped by GCN but entirely identified by the H-combined model.",
"It is important to note that model performance is sensitive to factors such as percentage of seen expressions and variability of MWEs (Pasquer et al., 2018).",
"In FA for instance, 67% of the MWEs in the test set are seen at training time, making them easy to be captured by the baseline (Taslimipoor et al., 2018).",
"Furthermore, only 21% of MWEs in FA and 15% in EN are discontinuous as opposed to 44% in FR and 38% in DE.",
"In this case, a sequential model can already learn the patterns with high accuracy and the potential of a GCN and self-attention is not fully exploited.",
"Also in DE, a sizable portion of MWEs are verbal idioms (VIDs) which are known for their lexico-syntactic fixedness and prevalence of tokens that lack a standalone meaning and occur only in a limited number of contexts (also known as cranberry words).",
"Furthermore, MWEs in the Persian dataset are all Light Verb Constructions (LVCs), which can be modelled using lexical semantic templates (Megerdoomian, 2004).",
"For such MWEs, our models compete with strong sequential baselines.",
"In this paper, we introduced the application of GCN and attention mechanism to identification of verbal MWEs and finally proposed and tested a hybrid approach integrating both models.",
"Our particular point of interest is discontinuity in MWEs which is an under-explored area.",
"All the individual and combined models outperform state-of-the-art in all considered criteria.",
"In future, we will further develop our system using structured attention (Kim et al., 2017) and try to improve the accuracy of parsers in multi-tasking scenarios."
] | [
"objective",
"objective",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain"
] |
[
"Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.",
"Existing techniques of generating such examples are typically driven by local heuristic rules that are agnostic to the context, often resulting in unnatural and ungrammatical outputs.",
"This paper presents CLARE, a C ontextua L ized A dversa R ial E xample generation model that produces fluent and grammatical outputs through a mask-then-infill procedure.",
"CLARE builds on a pre-trained masked language model and modifies the inputs in a context-aware manner.",
"We propose three contextualized perturbations, Replace , Insert and Merge , that allow for generating outputs of varied lengths.",
"CLARE can flexibly combine these perturbations and apply them at any position in the inputs, and is thus able to attack the victim model more effectively with fewer edits.",
"Extensive experiments and human evaluation demonstrate that CLARE outperforms the baselines in terms of attack success rate, textual similarity, fluency and grammaticality.",
"Adversarial example generation for natural language processing (NLP) tasks aims to perturb input text to trigger errors in machine learning models, while keeping the output close to the original.",
"Besides exposing system vulnerabilities and helping improve their robustness and security (Zhao et al., 2018; Wallace et al., 2019; Cheng et al., 2019; Jia et al., 2019, inter alia ), adversarial examples are also used to analyze and interpret the models' decisions (Jia and Liang, 2017; Ribeiro et al., 2018).",
"Generating adversarial examples for NLP tasks can be challenging, in part due to the discrete nature of natural language text.",
"Most recent efforts have explored heuristic rules, such as replacing tokens with their synonyms (Samanta and Mehta, 2017; Super ant colony hits Australia {Coast, , } .",
"Liang et al., 2019; Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020, inter alia ).",
"Despite some empirical success, rule-based methods are agnostic to context, limiting their ability to produce natural, fluent, and grammatical outputs (Wang et al., 2019b; Kurita et al., 2020, inter alia ).",
"This work presents CLARE, a C ontextua L ized A dversa R ial E xample generation model for text.",
"CLARE perturbs the input with a mask-then-infill procedure: it first detects the vulnerabilities of a model and deploys masks to the inputs to indicate missing text, then plugs in an alternative using a pretrained masked language model (e.g., RoBERTa; Liu et al., 2019).",
"CLARE features three contextualized perturbations: Replace , Insert and Merge , which respectively replace a token, insert a new one, and merge a bigram (Figure 1).",
"As a result, it can generate outputs of varied lengths, in contrast to token replacement based methods that are limited to outputs of the same lengths as the inputs (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020).",
"Further, CLARE searches over a wider range of attack strategies, and is thus able to attack the victim model more effectively with fewer edits.",
"Building on a masked language model, CLARE maximally preserves textual similarity, fluency, and grammaticality of the outputs.",
"We evaluate CLARE on text classification, natural language inference, and sentence paraphrase tasks, by attacking finetuned BERT models (De-vlin et al., 2019).",
"Extensive experiments and human evaluation results show that CLARE outperforms baselines in terms of attack success rate, textual similarity, fluency, and grammaticality, and strikes a better balance between attack success rate and preserving input-output similarity.",
"Our analysis further suggests that the CLARE can be used to improve the robustness of the downstream models, and improve their accuracy when the available training data is limited.",
"We release our code and models at https://github.com/ cookielee77/CLARE .",
"At a high level, CLARE applies a sequence of contextualized perturbation actions to the input.",
"Each can be seen as a local mask-then-infill procedure: it first applies a mask to the input around a given position, and then fills it in using a pretrained masked language model (2.1).",
"To produce the output, CLARE scores and descendingly ranks the actions, which are then iteratively applied to the input (2.2).",
"We begin with a brief background review and laying out of necessary notation.",
"Background.",
"Adversarial example generation centers around a victim model f , which we assume is a text classifier.",
"We focus on the black-box setting, allowing access to f 's outputs but not its configurations such as parameters.",
"Given an input sequence x = x 1 x 2 . . . x n and its label y (assume f ( x ) = y ), an adversarial example x (cid:48) is supposed to modify x to trigger an error in the victim model: f ( x (cid:48) ) (cid:54) = f ( x ) .",
"At the same time, textual modifications should be minimal, such that x (cid:48) is close to x and the human predictions on x (cid:48) stay the same.",
"1 This is achieved by requiring the similarity be-1 In computer vision applications, minor perturbations to continuous pixels can be barely perceptible to humans, thus it can be hard for one to distinguish x and x (cid:48) (Goodfellow et al., 2015).",
"It is not the case for text, however, since changes to the discrete tokens are more likely to be noticed by humans.",
"tween x (cid:48) and x to be larger than a threshold: sim( x (cid:48) , x ) > (cid:96) .",
"A common choice of sim( , ) is to encode sentences using neural networks, and calculate their cosine similarity in the embedding space (Jin et al., 2020).",
"At a given position of the input sequence, CLARE can execute three perturbation actions: Replace , Insert , and Merge , which we introduce in this section.",
"These apply masks at the given position with different strategies, and then fill in the missing text based on the unmasked context.",
"Replace : A Replace action substitutes the token at a given position i with an alternative (e.g., changing fantastic to amazing in The movie is fantastic .).",
"It first replaces x i with a mask, and then selects a token z from a candidate set Z to fill in: (cid:101) x = x 1 . . . x i 1 [MASK ] x i +1 . . . x n , replace ( x , i ) = x 1 . . . x i 1 z x i +1 . . . x n .",
"For clarity, we denote replace ( x , i ) by (cid:101) x z .",
"To produce an adversarial example, z should fit into the unmasked context; (cid:101) x z should be similar to x ; (cid:101) x z should trigger an error in f .",
"These can be achieved by selecting a z such that z receives a high probability from a masked language model: p MLM ( z | (cid:101) x ) > k ; (cid:101) x z is similar to x : sim( x , (cid:101) x z ) > (cid:96) ; f predicts low probability for the gold label given (cid:101) x z , i.e., p f ( y | (cid:101) x z ) is small.",
"p MLM denotes a pretrained masked language model (e.g., RoBERTa; Liu et al., 2019).",
"Using higher k , (cid:96) thresholds produces outputs that are more fluent and closer to the original.",
"However, this can undermine the success rate of the attack.",
"We choose k , (cid:96) to trade-off between these two aspects.",
"2 The first two requirements can be met by the construction of the candidate set: Z = (cid:8) z (cid:48) V | p MLM ( z (cid:48) | (cid:101) x ) > k, sim( x , (cid:101) x z (cid:48) ) > (cid:96) (cid:9) .",
"V is the vocabulary of the masked language model.",
"To meet the third, we select from Z the token that, if filled in, will cause most confusion to f : z = arg min z (cid:48) Z p f ( y | (cid:101) x z (cid:48) ) .",
"2 k and (cid:96) are empirically set as 5 10 3 and 0 .",
"7 , respectively.",
"This also reduces the computation overhead: in our experiments |Z| is 42 on average, much smaller than the vocabulary size ( |V| = 50 , 265 ).",
"The Insert and Merge actions differ from Replace in terms of masking strategies.",
"The alternative token z is selected analogously to that in a Replace action.",
"Insert : This aims to add extra information to the input (e.g., changing I recommend ... to I highly recommend ...).",
"It inserts a mask after x i and then fills it.",
"Slightly overloading the notations, (cid:101) x = x 1 . . . x i [MASK ] x i +1 . . . x n , insert ( x , i ) = x 1 . . . x i z x i +1 . . . x n .",
"z can be the same as one of the masked tokens (e.g., masking out New York and then filling inYork).",
"This can be seen as deleting a token from the input.",
"For Insert and Merge , z is chosen in the same manner as replace action.",
"3 In sum, at each position i of an input sequence, CLARE first: ( i ) replaces x i with a mask; ( ii ) or inserts a mask after x i ; ( iii ) or merges x i x i +1 into a mask.",
"Then a set of candidate tokens is constructed with a masked language model and a textual similarity function; the token minimizing the gold label's probability is chosen as the alternative token.",
"The combination of these three operations enables conversion between any two sequences.",
"CLARE first constructs the local actions for all positions in parallel, i.e., the actions at position i do not affect those at other positions.",
"Then, to produce the adversarial example, CLARE gathers the local actions and selects an order to execute them.",
"Given an input pair ( x , y ) , let n denote the length of x .",
"CLARE chooses from 3 n actions to produce the output: 3 actions for each position, assuming the candidate token sets are not empty.",
"We aim to generate an adversarial example with minimum modifications to the input.",
"To achieve this, we iteratively apply the actions, and first select those 3 A perturbation will not be considered if its candidate token set is empty.",
"Each action is associated with a score, measuring how likely it can confuse f : denote by a ( x ) the output of applying action a to x .",
"The score is then the negative probability of predicting the gold label from f , using a ( x ) as the input: s ( x ,y ) ( a ) = p f (cid:0) y | a ( x ) (cid:1) .",
"Only one of the three actions can be applied at each position, and we select the one with the highest score.",
"This constraint aims to avoid multiple modifications around the same position, e.g., merging New York into Seattle and then replacing it with Boston.",
"Actions are iteratively applied to the input, until an adversarial example is found or a limit of actions T is reached.",
"Each step selects the highest-scoring action from the remaining ones.",
"Algorithm 1 summarizes the above procedure.",
"4 Discussion.",
"A key technique of CLARE is the local mask-then-infill perturbation.",
"Compared with existing context-agnostic replacement approaches (Alzantot et al., 2018; Jin et al., 2020; Ren et al., 2019, inter alia ), contextualized infilling produces more fluent and grammatical outputs.",
"Generating adversarial examples with masked language models is also explored by concurrent work 4 Insert and Merge actions change the text length.",
"BERTAttack (Li et al., 2020) and BAE (Garg and Ramakrishnan, 2020).",
"5 BERTAttack only replaces tokens and thus can only produce outputs of the same lengths as the inputs.",
"This is analogous with a CLARE model with the Replace action only.",
"BAE entangles replacing and inserting tokens: it inserts only at positions neighboring a replaced token, limiting its attacking capability.",
"Departing from both, CLARE uses three different perturbations ( Replace , Insert and Merge ), each allowing efficient attacking against any position of the input, and can produce outputs of varied lengths.",
"As we will show in the experiments (3.3), CLARE outperforms both these methods.",
"When selecting the attack positions, neither BERTAttack or BAE takes into account the tokens to be infilled, whereas CLARE does.",
"This results in better adversarial attack performance according to our ablation study (4.1).",
"CLARE demonstrates the advantage of using RoBERTa over BERT, which was used in the concurent works (4.1).",
"We evaluate CLARE on text classification, natural language inference, and sentence paraphrase tasks.",
"We begin by describing the implementation details of CLARE and the baselines (3.1).",
"3.2 introduces the experimental datasets and the evaluation metrics; the results are summarized in 3.3.",
"We experiment with a distilled version of RoBERTa (RoBERTa distill ; Sanh et al., 2019) as the masked language model for contextualized infilling.",
"We also compare to base sized RoBERTa (RoBERTa base ; Liu et al., 2019) and base sized BERT (BERT base ; Devlin et al., 2019) in the ablation study (4.1).",
"The similarity function builds on the universal sentence encoder (USE; Cer et al., 2018).",
"The victim model is an MLP classifier on top of BERT base .",
"It takes as input the first token's contextualized representation.",
"We finetune BERT when training the victim model.",
"Baselines.",
"We compare CLARE with recent state-of-the-art word-level black-box adversarial 5 Both Li et al. (2020) and Garg and Ramakrishnan (2020) are published concurrently to an initial report of this work.",
"TextFooler : a state-of-the-art model by Jin et al. (2020).",
"This replaces tokens with their synonyms derived from counter-fitting word embeddings (Mrkic et al., 2016), and uses the same text similarity function as our work.",
"TextFooler+LM : an improved variant of TextFooler we implemented based on Alzantot et al. (2018) and Cheng et al. (2019).",
"This inherits token replacement from TextFooler, but uses an additional small sized GPT-2 language model (Radford et al., 2019) to filter out those candidate tokens that do not fit in the context with calculated perplexity.",
"BERTAttack : a mask-then-infill approach by Li et al. (2020).",
"It greedily replaces tokens with the predictions from BERT.",
"BAE is not listed as it has a similar performance as BERTAttack (Garg and Ramakrishnan, 2020).",
"We use the open source implementation of the above baselines provided by the authors.",
"More details are included in Appendix A.1.",
"Datasets.",
"We evaluate CLARE with the following datasets: Yelp Reviews (Zhang et al., 2015): a binary sentiment classification dataset based on restaurant reviews.",
"AG News (Zhang et al., 2015): a collection of news articles with four categories: World , Sports , Business and Science & Technology .",
"MNLI (Williams et al., 2018): a natural language inference dataset.",
"Each instance consists of a premise-hypothesis pair, and the model is supposed to determine the relation between them from a label set of entailment , neutral , and contradiction .",
"It covers text from a variety of domains.",
"6 We only examine the performance on the matched set, since the mismatched set is easier to attack.",
"QNLI (Wang et al., 2019a): a binary classification dataset converted from the Stanford question answering dataset (Rajpurkar et al., 2016).",
"The task is to determine whether the context contains the answer to a question.",
"It is mainly based on English Wikipedia articles.",
"Table 1 summarizes some statistics of the datasets.",
"In addition to the above four datasets, we experiment with DBpedia ontology dataset (Zhang et al., 2015), Stanford sentiment treebank (Socher et al., 2013), Microsoft Research Paraphrase Corpus (Dolan and Brockett, 2005), and Quora Question Pairs from the GLUE benchmark.",
"The results on these datasets are summarized in Appendix A.2.",
"Following previous practice (Alzantot et al., 2018), we fine-tune CLARE on training data, and evaluate with 1,000 randomly sampled test instances of lengths 100 .",
"In the sentence-pair tasks (e.g., MNLI, QNLI), we attack the longer sentence excluding the tokens that appear in both.",
"Evaluation metrics.",
"We follow previous works (Jin et al., 2020; Morris et al., 2020a), and evaluate the models with the following automatic metrics: Attack success rate (A-rate) : the percentage of adversarial examples that can successfully attack the victim model.",
"Modification rate (Mod) : the percentage of modified tokens.",
"Each Replace or Insert action accounts for one token modified; a Merge action is considered modifying one token if one of the two merged tokens is kept (e.g., merging bigram ab into a ), and two otherwise (e.g., merging bigram ab into c ).",
"Perplexity (PPL) : a metric used to evaluate the fluency of adversaries (Kann et al., 2018; Zang et al., 2020).",
"The perplexity is calculated using small sized GPT-2 with a 50K-sized vocabulary (Radford et al., 2019).",
"Grammar error (GErr) : the absolute number of increased grammatical errors in the successful adversarial example, compared to the original text.",
"Following (Zang et al., 2020; Morris et al., 2020b), we calculate this by the LanguageTool (Naber et al., 2003).",
"7 Textual similarity (Sim) : the cosine similarity between the input and its adversary.",
"Following (Jin et al., 2020; Morris et al., 2020b), we calculate this using the universal sentence encoder (USE; Cer et al., 2018).",
"The last four metrics are averaged across those adversarial examples that successfully attack the victim model.",
"Table 2 summarizes the results.",
"Overall CLARE achieves the best performance on all metrics consistently across different datasets.",
"Notably, CLARE outperforms BERTAttack, the strongest baseline, by a more than 5.4% attack success rate with fewer average modifications to the text.",
"We attribute this to CLARE's flexible attack strategies obtained by combining three different perturbations at any position.",
"Interestingly, using contextualized embeddings does not appear to guarantee better fluency: 7 https://www.languagetool.org/ 20% 40% 60% 80% 100% Attack Success Rate 0.5 0.6 0.7 0.8 T e x t u a l S i m il a r i t y CLAREBERTAttackTextFoolerTextFooler+LM 20% 40% 60% 80% 100% Attack Success Rate 50 150 250 350 P e r p l e x i t y CLAREBERTAttackTextFoolerTextFooler+LM Figure 2: Left : Attack success rate and textual similarity trade-off curves ( both higher the better ).",
"despite fewer modifications to the text, BERTAttack achieves similar perplexity to language-model-augmented TextFooler on three out of the four datasets, while CLARE consistently outperforms both.",
"In terms of grammatical errors, contextualized models (CLARE and BERTAttack) are substantially better than the others, with CLARE performing the best.",
"In terms of similarity, CLARE outperforms all baselines by more than 0.02, a larger gap than BERTAttack's improvements over TextFooler variants.",
"We observe similar trends on other datasets in Appendix A.2.",
"Figure 2 compares trade-off curves between attack success rate and textual similarity.",
"We tune the thresholds for constructing the candidate token sets, and plot textual similarity against the attack success rate.",
"CLARE strikes the best balance, showing a clear advantage in success rate with least similarity drop.",
"We observe similar trends for attack success rate and perplexity trade off.",
"Human evaluation.",
"We further conduct human evaluation on the AG News dataset.",
"We randomly sample 300 instances which both CLARE and TextFooler successfully attack.",
"For each input, we pair the adversarial examples from the two models, and present them to crowd-sourced judges along with the original input and the gold label.",
"We ask them which they prefer with a neutral option in terms of (1) having a meaning that is closer to the original input (similarity), and (2) being more fluent and grammatical (fluency and grammatical-ity).",
"Additionally, we ask the judges to annotate adversarial examples, and compare their annotations against the gold labels (label consistency).",
"We collect 5 responses for each pair on every eval-Metric CLARE Neutral TextFooler Similarity 56.1 2 .",
"uated aspect.",
"Further details are in Appendix A.3.",
"As shown in Table 3, CLARE has a significant advantage over TextFooler: in terms of similarity 56% responses prefer CLARE, while 16% prefer TextFooler.",
"The trend is similar for fluency & grammaticality (42% vs. 9%).",
"This observation is consistent with results from automatic metrics.",
"On label consistency, CLARE slightly underperforms TextFooler at 68% with a 95% condidence interval (CI) (66% , 70%) , versus 70% with a 95% CI (68% , 73%) .",
"We attribute this to an inherent overlap of some categories in the AG News dataset, e.g., Science & Technology and Business , as evidenced by a 71% label consistency for original inputs.",
"This section first conducts an ablation study (4.1).",
"We then explore CLARE's potential to be used to improve downstream models' robustness and accuracy in 4.3.",
"In 4.2, we empirically observe that CLARE tends to attack noun and noun phrases.",
"We ablate each component of CLARE to study its effectiveness.",
"We evaluate on the 1,000 randomly selected AG news instances (3.2).",
"The results are summarized in Table 5.",
"We first investigate the performance of three perturbations when applied individually.",
"Among three editing strategies, using INSERTONLY achieves the best performance, with REPLACEONLY coming a close second.",
"MERGEONLY underperforms the other two, partly because the attacks are restricted to bigram noun phrases (3.1).",
"Combining all three perturbations, CLARE achieves the best performance with the least modifications.",
"AG (Sci&Tech) Sprint Corp. is in talks with Qualcomm Inc. about using a network the chipmaker is building to deliver live television to Sprint mobile phone customers.",
"TextFooler(Business) Sprint Corps .",
"is in talks with Qualcomm Inc. about operated a network the chipmaker is consolidation to doing viva television to Sprint mobile phone customers.",
"CLARE (Business) Sprint Corp. is in talks with Qualcomm Inc. about using a network Qualcomm is building to deliver cable television to Sprint mobile phone customers.",
"MNLI (Neutral) Premise : Let me try it.",
"She began snapping her fingers and saying the word eagerly, but nothing happened.",
"Hypothesis : She became frustrated when the spell didn't work.",
"TextFooler(Contra-diction) Premise : Authorisation me attempting it.",
"She triggered flapping her pinkies and said the word eagerly, but nothing arisen .",
"Hypothesis : She became frustrated when the spell didn't work.",
"CLARE (Contra-diction) Premise : Let me try it.",
"She began snapping her fingers and saying the word eagerly, but nothing unexpected happened.",
"Hypothesis : She became frustrated when the spell didn't work.",
"To examine the efficiency of attacking order, we compare REPLACEONLY against BERTAttack.",
"Notably, REPLACEONLY outperforms BERTAttack across the board.",
"This is presumably because BERTAttack does not take into account the tokens to be infilled when selecting the attack positions.",
"We now turn to the two constraints imposed when constructing the candidate token set.",
"Perhaps not surprisingly, ablating the textual similarity constraint ( w/o sim > l ) decreases textual similarity performance, but increases other aspects.",
"Ablating the masked language model yields a better success rate, but much worse perplexity, grammaticality, and textual similarity.",
"Finally, we compare CLARE implemented with different masked language models.",
"Table 6 summarizes the results.",
"Overall, distilled RoBERTa achieves the fastest speed without losing performance.",
"Since the victim model is based on BERT, we conjecture that it is less efficient to attack a model using its own information.",
"In this section, we break down the adversarial attacks by part-of-speech (POS) tags in AG News dataset.",
"We find that most of the adversarial attacks happen to nouns or noun phrases.",
"Presumably, in many topic classification datasets, the prediction heavily relies on some characteristic noun words/phrases.",
"As shown in Table 7, 64% of the Replace actions are applied to nouns.",
"Insert actions tend to insert tokens into noun phrase bigram: two of the most frequent POS bigrams are noun phrases.",
"In fact, around 48% of the Insert actions are applied to noun phrases.",
"This also justifies our choice of only applying Merge to noun phrases.",
"This section explores CLARE's potential in improving downstream models' accuracy and robustness.",
"Following Tsipras et al. (2018), we use CLARE to generate adversarial examples for AG news training instances, and include them as additional training data.",
"We consider two settings: training with (1) full training data and full adversarial data and (2) 10% randomly-sampled training data and its adversarial data, to simulate the low-resource scenario.",
"For both settings, we compare a BERT-based MLP classifier and a TextCNN (Kim, 2014) classifier without any pretrained embedding.",
"shown in Table 8, when the full training data is available, adversarial training slightly decreases the test accuracy by 0.2% and 0.5% respectively.",
"This aligns with previous observations (Jia et al., 2019).",
"Interestingly, in the low-data scenario with adversarial training, the BERT-based classifier has no accuracy drop, and TextCNN achieves a 2.0% absolute improvement.",
"This suggests that a model with less capacity can benefit more from silver data.",
"Does adversarial training help the models defend against adversarial attacks?",
"To evaluate this, we use CLARE to attack classifiers trained with and without adversarial examples.",
"9 A higher success rate and fewer modifications indicate a victim classifier is more vulnerable to adversarial attacks.",
"As shown in Table 8, in 3 out of the 4 cases, adversarial training helps to decrease the attack success rate by more than 10.3%, and to increase the number of modifications needed by more than 0.8.",
"The only exception is the TextCNN model trained with 10% data.",
"A possible reason can be that it is trained with little data and thus generalizes less well.",
"Textual adversarial attack.",
"An increasing amount of effort is being devoted to generating better textual adversarial examples with various 9 In preliminary experiments, we found that it is more diffi-cult to use other models to attack a victim model trained with the adversarial examples generated by CLARE, than to use CLARE itself.",
"attack models.",
"Character-based models (Liang et al., 2019; Ebrahimi et al., 2018; Li et al., 2018; Gao et al., 2018, inter alia ) use misspellings to attack the victim systems; however, these attacks can often be defended by a spell checker (Pruthi et al., 2019; Zhou et al., 2019b; Jones et al., 2020).",
"Many sentence-level models (Iyyer et al., 2018; Wang et al., 2020; Zou et al., 2020, inter alia ) have been developed to introduce more sophisticated token/phrase perturbations.",
"These, however, generally have difficulty maintaining semantic similarity with original inputs (Zhang et al., 2020a).",
"Recent word-level models explore synonym substitution rules to enhance semantic meaning preservation (Alzantot et al., 2018; Jin et al., 2020; Ren et al., 2019; Zhang et al., 2019; Zang et al., 2020, inter alia ).",
"Our work differs in that CLARE uses three contextualized perturbations that produces more fluent and grammatical outputs.",
"Text generation with BERT.",
"Generation with masked language models has been widely studied in various natural language tasks, ranging from lexical substitution (Wu et al., 2019a; Zhou et al., 2019a; Qiang et al., 2020; Wu et al., 2019b, inter alia ) to non-autoregressive generation (Gu et al., 2018; Lee et al., 2018; Ghazvininejad et al., 2019; Wang and Cho, 2019; Ma et al., 2019; Sun et al., 2019; Ren et al., 2020; Zhang et al., 2020b, inter alia ).",
"We have presented CLARE, a contextualized adversarial example generation model for text.",
"It uses contextualized knowledge from pretrained masked language models, and can generate adversarial examples that are natural, fluent and grammatical.",
"With three contextualized perturbation patterns, Replace , Insert and Merge in our arsenal, CLARE can produce outputs of varied lengths and achieves a higher attack success rate than baselines and with fewer edits.",
"Human evaluation shows significant advantages of CLARE in terms of textual similarity, fluency and grammaticality.",
"We release our code and models at https://github.com/cookielee77/CLARE .",
"We would like to thank the reviewers for their constructive comments.",
"We thank NVIDIA Corporation for the donation of the GPU used for this research.",
"We also thank Tongshuang Wu, Guoyin Wang and Shuhuai Ren for their helpful discussions and feedback."
] | [
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Several cluster-based methods for semantic change detection with contextual embeddings emerged recently.",
"They allow a fine-grained analysis of word use change by aggregating embeddings into clusters that reflect the different usages of the word.",
"However, these methods are unscalable in terms of memory consumption and computation time.",
"Therefore, they require a limited set of target words to be picked in advance.",
"This drastically limits the usability of these methods in open exploratory tasks, where each word from the vocabulary can be considered as a potential target.",
"We propose a novel scalable method for word usage-change detection that offers large gains in processing time and significant memory savings while offering the same interpretability and better performance than unscalable methods.",
"We demonstrate the applicability of the proposed method by analysing a large corpus of news articles about COVID-19.",
"Studying language evolution is important for many applications, since it can reflect changes in the political and social sphere.",
"In the literature, the study of language evolution either focuses on long-term changes in the meaning of a word, or on more common short-term evolutionary phenomena, such as the word suddenly appearing in a new context, while keeping its meaning unchanged in a lexicographic sense.",
"We refer to all types of language evolutionshortor long-term, with or without meaning changeas word usage change, a broad category that includes semantic change, but also any shifts in the context in which a word appears.",
"Recent studies (Giulianelli et al., 2020; Martinc et al., 2020a) show that clustering of contextual embeddings could be a proxy for word usage change: if clusters, which in theory capture distinct word usages, are distributed differently across time periods, These authors contributed equally.",
"it indicates a possible change in word's context or even loss or gain of a word sense.",
"Thus, the cluster-based approach offers a more intuitive interpretation of word usage change than alternative methods, which look at the neighborhood of a word in each time period to interpret the change (Gonen et al., 2020; Martinc et al., 2020b) and ignore the fact that a word can have more than one meaning.",
"The main limitation of the cluster-based methods is the scalability in terms of memory consumption and time: clustering is applied to each word in the corpus separately and all occurrences of a word need to be aggregated into clusters.",
"For large corpora with large vocabularies, where some words can appear millions of times, the use of these methods is severely limited.",
"To avoid the scalability issue, cluster-based methods are generally applied to a small set of less than a hundred manually pre-selected words (Giulianelli et al., 2020; Martinc et al., 2020a).",
"This drastically limits the application of the methods in scenarios such as identification of the most changed words in a large corpus or measuring of usage change of extremely frequent words, since clustering of all of word's contextual embeddings requires large computational resources.",
"One way to solve the scalability problem using contextual embeddings is to average a set of contextual representations for each word into a single static representation (Martinc et al., 2020b).",
"Averaging, while scalable, loses a lot on the interpretability aspect, since word usages are merged into a single representation.",
"The method we propose in this paper tackles scalability and interpretability at the same time.",
"The main contributions of the paper are the following: A scalable method for contextual embeddings clustering that generates interpretable representations and outperforms other cluster-based methods.",
"A method of measuring word usage change between periods with the Wasserstein distance .",
"As far as we are aware, this is the first paper leveraging optimal transport for lexical semantic change detection.",
"A cluster filtering step, which balances the defi-ciencies of clustering algorithms and consistently improves performance.",
"An interpretation pipeline that automatically labels word senses, allowing a domain expert to find the most changing concepts and to understand how those changes happened.",
"The practical abilities of our method are demonstrated on a large corpus of news articles related to COVID-19, the Aylien Coronavirus News Dataset 1 .",
"We compute the degree of usage change of almost 8,000 words, i.e., all words that appear more than 50 times in every time slice of the corpus, in the collection of about half a million articles in order to find the most changing words and interpret their drift 2 .",
"Diachronic word embedding models have undergone a surge of interest in the last two years with the successive publications of three articles dedicated to a literature review of the domain (Ku-tuzov et al., 2018; Tahmasebi et al., 2018; Tang, 2018).",
"Most approaches build static embedding models for each time slice of the corpus and then make these representations comparable by either employing incremental updating (Kim et al., 2014) or vector space alignment (Hamilton et al., 2016b).",
"The alignment method has proved superior on a set of synthetic semantic drifts (Shoemark et al., 2019) and has been extensively used (Hamilton et al., 2016b; Dubossarsky et al., 2017) and improved (Dubossarsky et al., 2019) in the literature.",
"The recent SemEval Task on Unsupervised lexical semantic change detection has shown that this method is most stable and yields the best averaged performance across four SemEval corpora (Schlechtweg et al., 2020).",
"Yet another approach (Hamilton et al., 2016a; Yin et al., 2018) is based on comparison of neighbors of a target word in different time periods.",
"This approach has been recently used to tackle the scalability problem (Gonen et al., 2020).",
"The recent rise of contextual embeddings such as BERT (Devlin et al., 2019) and ELMO (Peters et al., 2018) introduced significant changes to word representations.",
"Contextual embeddings can be used for usage change detection by aggregating the information from the set of token embeddings.",
"This can be done either through averaging of all vectors within a time slice and then computing averaged vector similarity (Martinc et al., 2020b), by computing a pairwise distance between vectors from different time slices (Kutuzov and Giulianelli, 2020), or by clustering all token representations to approximate its set of senses (Giulianelli et al., 2020).",
"The analysis in this paper derives from this last set of methods, which demonstrate a higher performance than static embeddings methods at least on some datasets (Martinc et al., 2020a).",
"Automatic semantic shift detection has been used for text stream monitoring tasks, such as event detection (Kutuzov et al., 2017) viewpoint analysis (Azarbonyad et al., 2017) or monitoring of rapid discourse changes during crisis events (Stew-art et al., 2017).",
"None of these applications use clustering techniques and, as far as we are aware, only Martinc et al. (2020b) uses contextual embeddings for news stream analysis.",
"In this paper we demonstrate the large potential of contextual embeddings for the interpretable tracking of short-term changes in word usage, which has a practical application for crisis-related news monitoring.",
"The main motivation for this research are the scalability or interpretability issues of previous methods for word usage change detection.",
"The ones using contextual embeddings are either interpretable but unscalable (Giulianelli et al., 2020; Martinc et al., 2020a) or scalable but uninterpretable (Mar-tinc et al., 2020b).",
"The scalability issues of interpretable methods can be divided into two problems.",
"Memory consumption: Giulianelli et al. (2020) and Martinc et al. (2020a) apply clustering on all embeddings of each target word.",
"This procedure becomes unfeasible for large sets of target words or if the embeddings need to be generated on a large corpus, since too many embeddings need to be saved into memory for further processing.",
"To give an example, single-precision floating-point in Python requires 4 bytes of memory.",
"Each contextual embedding contains 768 floats (Devlin et al., 2019), leading each embedding to occupy 3072 bytes 3 .",
"To use the previous methods on the Aylien Coronavirus News Dataset, which contains 250M tokens, about 768 Gb RAM would be necessary to store the embeddings for the entire corpus.",
"If we limit our vocabulary to the 7,651 words that appear at least 50 times in every time slice and remove the stopwords (as we do in this work), we still need to generate contextual embeddings for 120M tokens, which is about 369 Gb of RAM.",
"Complexity of clustering algorithms: For the complexity analyses, we denote by d the dimension of the embedding, k is the number of clusters and n is the number of contextual embeddings, i.e., the number of word occurrences in the corpus.",
"The time complexity of the affinity propagation algorithm (the best performing algorithm according to Martinc et al. (2020a)) is O ( n 2 td ) , with t being the predefined maximum number of iterations of the data point message exchange.",
"The time complexity of the simpler k-means algorithm 4 can be stated as O ( tknd ) , where t is the number of iterations of Lloyd's algorithm (Lloyd, 1982).",
"As an example, consider the word coronavirus , which appears in the Aylien corpus about 1,2M times.",
"For k-means with k = 5 and a maximal number of iterations set to 300 (the Scikit library default), about 300 5 1 , 300 , 000 768 1 .",
"5 10 12 operations are conducted for the clustering.",
"With affinity propagation with the maximum number of iterations set to 200 (the default), clustering of the word coronavirus would require 1 , 300 , 000 2 200 768 2 .",
"6 10 17 operations, which is impossible to conduct in a reasonable amount of time on a high end desktop computer.",
"Contextual Embeddings Method with Interpretability Limitations: The averaging approach (Martinc et al., 2020b) eliminates the scalability problems: token embeddings for each word are not collected in a list but summed together in an element-wise fashion, which means that only 768 floats need to be saved for each word in the vocabulary.",
"The averaged word representation is obtained for each time slice by dividing the sum by the word count.",
"A single embedding per word is 3 If we ignore the additional memory of a Python containere.g., a Numpy list or a Pytorch tensorrequired for storing this data.",
"4 Here we are referring to the Scikit implementation of the algorithm employed in this work: https://scikit-learn.org/ stable/modules/generated/sklearn.cluster.KMeans.html.",
"saved, leading to only 23.5 Mb of RAM required to store the embeddings for 7,651 words.",
"These representations loose on the interpretability aspect, since all word usages are merged into a single averaged representation.",
"It makes the method inappropriate for some tasks such as automatic labelling of word senses, and in some cases affects the overall performance of the method (Martinc et al., 2020a).",
"Our word usage change detection pipeline follows the procedure proposed in the previous work (Mar-tinc et al., 2020a; Giulianelli et al., 2020): for each word, we generate a set of contextual embeddings using BERT (Devlin et al., 2019).",
"These representations are clustered using k-means or affinity propagation and the derived cluster distributions are compared across time slices by either using Jensen-Shannon divergence (JSD) (Lin, 2006) or the Wasserstein distance (WD) (Solomon, 2018).",
"Finally, words are ranked according to the distance measure, assuming that the ranking resembles a relative degree of usage shift.",
"The primary contributions of this work lay in the embedding generation step, which improves the scalability of the method, and in leveraging WD to compute the distance between clusters.",
"We also propose post-processing steps, which domain experts could use for the interpretation of results.",
"We now describe the pipeline in more details.",
"We use a pre-trained BERT model for each language of the evaluation corpora 5 .",
"All models have 12 attention layers and a hidden layer of size 768.",
"We fine-tune them for domain adaptation on each corpus as a masked language model for 5 epochs.",
"Then, we extract token embeddings from the fine-tuned models.",
"Each corpus is split into time slices.",
"The models are fed 256 tokens long sequences in batches of 16 sequences at once.",
"We generate sequence embeddings by summing the last four encoder output layers of BERT, following Devlin et al. (2019).",
"Next, we split each sequence into 256 subparts to obtain a separate contextual embedding of size 768 for each token.",
"Since one token does not necessarily correspond to one word due to byte-5 For German: bert-base-german-cased (https://deepset.",
"ai/german-bert, for English: bert-base-uncased model, for Latin: bert-base-multilingual-uncased model from the huggingface library, for Swedish: bert-base-swedish-uncased (https://github.com/af-ai-center/SweBERT).",
"pair tokenization, we average embeddings for each byte-pair token constituting a word to obtain embeddings for each occurrence of a word.",
"Next, after obtaining a contextual embedding vector for each target word in a specific sequence, we decide whether this vector should be saved to the list or merged with one of the previously obtained vectors for the same word in the same time slice.",
"To improve the scalability, we limit the number of contextual embeddings that are kept in the memory for a given word and time slice to a predefined threshold.",
"The threshold of 200 was chosen empirically from a set of threshold candidates (20, 50, 100, 200, 500) and offers a reasonable compromise between scalability and performance.",
"The new vector is merged if it is too similari.e., a duplicate or a near-duplicateto one of the saved vectors or if the list already contains a predefined maximum number of vectors (200 in our case).",
"If | L | 200 or if any vector in the list L is a near duplicate to e new , we find a vector e m in the list which is the closest to e new in terms of cosine similarity: e m = arg max e i L s ( e i , e new ) This element e m is then modified by summing it with e new : e m e m + e new The number of summed-up elements for each of the 200 groups in the list is stored besides their summed-up representations.",
"Once the model has been fed with all the sequences in the time slice, the final summed-up vector is divided by this number to obtain an averaged embedding.",
"By having only 200 merged word embeddings per word per time slice, and by limiting the vocabulary of the corpus to 7,651 target words, we require up to 4.7 Gb of space for each time slice, no matter the size of the corpus.",
"While this is still 200 times more space than if the averaging method was used (Martinc et al., 2020b), the conducted experiments show that the proposed method nevertheless keeps the bulk of the interpretability of the less scalable method proposed by Giulianelli et al. (2020), and offers competitive performance on several corpora.",
"After collecting 200 vectors for each word in each time slice, we conduct clustering on these lists to extract the usage distribution of the word at each period.",
"Clustering for a given word is performed on the set of all vectors from all time slices jointly.",
"We use two clustering methods previously applied for this task, namely k-means used in Giulianelli et al. (2020) and affinity propagation in Martinc et al. (2020a).",
"The main strength of affinity propagation is that the number of clusters is not defined in advance but inferred during training.",
"The clustering is usually skewed: a limited number of large clusters is accompanied with many clusters consisting of only a couple of instances.",
"Thus, affinity propagation allows to pick out the core senses of a word.",
"K-means tends to produce more even clusters.",
"Appearance of small clusters that contain only few instances and do not represent a specific sense or usage of the word is nevertheless relatively common, since BERT is sensitive to syntax and pragmatics, which are not necessarily relevant for usage change detection.",
"Another limitation of the k-means algorithm is that the number of clusters needs to be set in advance.",
"This means that if the number of actual word usages is smaller than a predefined number of clusters, k-means will generate more than one cluster for each word usage.",
"To compensate for these deficiencies, we propose an additional filtering and merging step.",
"A cluster is considered to be a legitimate representation of a usage of the word, if it contains at least 10 instances 6 .",
"We compute the average embedding inside each cluster, and measure the cosine distance (1 cosine similarity) between the average embeddings in each pair of legitimate clusters for a given word.",
"If the distance between two clusters is smaller than a threshold, the clusters are merged.",
"The threshold is defined as avg cd 2 std cd , where avg cd is the average pairwise cosine distance between all legitimate clusters and std cd is the standard deviation of that distance.",
"This merging procedure is applied recursively until the minimum distance between the two closest clusters is larger than the threshold.",
"After that, the merging proce-6 The threshold of 10 was derived from the procedure for manual labelling employed in the SemEval Task (Schlechtweg et al., 2020), where a constraint was enforced that the specific sense is attested at least 5 times in a specific time period in order to contribute word senses.",
"We set the overall threshold of 10, which roughly translates to 5 per time period, since all of our test corpora (besides Aylien) contain two time periods.",
"dure is applied to illegitimate clusters (that contain less than 10 instances), using the same threshold.",
"Illegitimate clusters could be added into one of the legitimate clusters or merged together to form a legitimate cluster with more than 10 instances.",
"If there is no cluster that is close enough to be merged with, the illegitimate cluster is removed.",
"After the clustering procedure described above, for each word in each time slice, we extract its cluster distribution and normalise it by the word frequency in the time slice.",
"Then target words are ranked according to the usage divergence between successive time slices, measured with the JSD or the WD 7 .",
"If a ground-truth ranking exists, the method can be evaluated using the Spearman Rank Correlation to compare the true and the outputted ranking.",
"In the exploratory scenario, the ranking is used to detect the most changing words and then investigate the most unevenly distributed clusters over time for the interpretation of the change.",
"JSD has been used for semantic shift detection in several recent papers, e.g. (Martinc et al., 2020a; Giulianelli et al., 2020; Kutuzov and Giulianelli, 2020).",
"Since this is the first paper applying WD for this purpose, we describe it in more details.",
"The motivation for using the WD (Solomon, 2018) is to take into account the position of the clusters in the semantic space when comparing them.",
"The JSD leverages semantic information encoded in the embeddings indirectly, distilled into two time-specific cluster distributions that JSD receives as an input.",
"In addition to cluster distributions, WD accesses characteristics of the semantic space explicitly, through a matrix of cluster averages (obtained by averaging embeddings in each cluster) of size T k 768 , where k is a number of clusters, T is a number of time slices and 768 is the embedding dimension.",
"This setup is a classical problem that can be solved using optimal transport (Peyr et al., 2019).",
"We denote with 1 and 2 the sets of k average embedding points in the two vector spaces, and with c 1 and c 2 the associated clusters distributions.",
"Thus, c 1 and c 2 are histograms on the simplex (pos-itive and sum to 1) that represent the weights of each embedding in the source ( 1 ) and target ( 2 ) distributions.",
"The task is to quantify the effort of moving one unit of mass from 1 to 2 using a cho-7 Using the POT package https://pythonot.github.io/.",
"sen cost function, in our case the cosine distance.",
"It is solved by looking for the transport plan , which is the minimal effort required to reconfigure c 1 's mass distribution into that of c 2 .",
"The WD is the sum of all travels that have to be made to solve the problem: WD ( c 1 , c 2 ) = min (cid:88) i,j i,j M i,j with 1 = c 1 ; (cid:124) 1 = c 2 ; 0 Where M R + m n is the cost matrix defining the cost to move mass from 1 to 2 .",
"We use the cosine similarity s , with M = 1 s ( 1 , 2 ) .",
"Interpretation.",
"Once the most changing words are detected, the next step is to understand how they change between two time slices by interpreting their clusters of usages.",
"Cluster distributions can be used directly to identify the clusters that are unevenly distributed across a time dimension.",
"However, a cluster itself may consist of several hundreds or thousands of word usages, i.e. sentences.",
"Interpreting the underlying sense behind each cluster by manually looking at the sentences is time-consuming.",
"To reduce human work, we extract the most discriminating words and bigrams for each cluster: by considering a cluster as a single document and all clusters as a corpus, we compute the term frequency inverse document frequency (tf-idf) score of each word and bigram in each cluster.",
"The stopwords and the words appearing in more than 80% of the clusters are excluded to ensure that the selected keywords are the most discriminant.",
"Thus, a ranked list of keywords for each cluster is obtained and top-ranked keywords are used for the interpretation of the cluster.",
"We use six existing manually annotated datasets for evaluation.",
"The first dataset, proposed by Gulordava and Baroni (2011), consists of 100 English words labelled by five annotators according to the level of semantic change between the 1960s and 1990s 8 .",
"To build the dataset, the annotators evaluated semantic change using their intuition, without looking at the context.",
"This procedure is problematic since an annotator may forget or not be aware of a particular sense of the word.",
"8 In order to make the proposed approach comparable to previous work, we remove four words that do not appear in the BERT vocabulary from the evaluation dataset, same as in Martinc et al. (2020a).",
"The organizers of the recent SemEval-2020 Task 1 Unsupervised Lexical Semantic Change Detection (Schlechtweg et al., 2020)employed another approach: the annotators had to decide whether a pair of sentences from different time periods convey the same meaning of the word (Schlechtweg and Schulte im Walde, 2020).",
"For each of the four languagesGerman, English, Latin and Swedish senses were manually annotated by labeling word senses in a pair of sentences drawn from different time periods.",
"All SemEval-2020 Task 1 corpora contain only two periods and the sentences are shuffled and lemmatized.",
"The lexical semantic change score is defined as the difference between word sense frequency distributions in the two time periods and measured by the Jensen-Shannon Distance (Lin, 2006).",
"The DURel dataset (Schlechtweg et al., 2018) is composed of 22 German words, ranked by semantic change by five annotators between two time periods, 17501799 and 18501899.",
"Similarly to SemEval, the ranking was build by evaluating the relatedness of pairs of sentences from two periods.",
"In order to conduct usage change detection on the target words proposed by Gulordava and Baroni (2011), we fine-tune the English BERT-base-uncased model and generate contextual embeddings on the Corpus of Historical American English (COHA) 9 .",
"We only use data from the 1960s to the 1990s (1960s has around 2.8M and 1990s 3.3M words), to match the manually annotated data.",
"For the SemEval Task 1 evaluation set, we fine-tune the BERT models and generate contextual embeddings on the four corpora provided by the organizers of the task, English (about 13.4M words), German (142M words), Swedish (182M words) and Latin (11.2M words).",
"Finally, we fine-tune BERT and generate embeddings on the German DTA corpus (17501799 period has about 25M and 18501899 has 38M tokens) 10 .",
"The results are shown in Table",
"1. We compare our scalable approach with the non-scalable clustering methods used by Giulianelli et al. (2020) and Martinc et al. (2020a).",
"Averaging (Martinc et al., 2020b) is the less interpretable method described in Section 3.",
"SGNS + OP + CD (Schlechtweg et al., 2019) refers to the state-of-the-art semantic change detection method employing non-contextual word embeddings: the Skip-Gram with Negative Sampling (SGNS) model is trained on two periods independently and aligned using Orthogonal Procrustes (OP).",
"Cosine Distance (CD) is used to compute the semantic change.",
"The Nearest Neighbors method (Gonen et al., 2020) also uses SGNS embeddings.",
"For each period, a word is represented by its top nearest neighbors (NN) according to CD.",
"Semantic change is measured as the size of the intersection between the NN lists of two periods.",
"On average, the proposed scalable clustering with filtering and merging of clusters leads to a higher correlation with gold standard than the standard non-scalable clustering methods: the best method (aff-prop WD) achieving a Spearman correlation with the gold standard of 0.474 compared to the best non-scalable k-means 5 JSD achieving the Spearman correlation of 0.391.",
"The method also outperforms averaging and NN, though it is outperformed by a large margin by the SGNS+OP+CD, achieving the score of 0.533.",
"The best performing clustering algorithm differs for different datasets.",
"On average, affinity propagation only outperforms k-means when filtering and merging of clusters is employed.",
"The effect of the filtering on k-means is positive on average but the difference is thin, as the number of clusters is low.",
"WD leads to better results than JSD on most of the corpora where averaging outperforms clustering, the only exception is DURel.",
"An extreme example is the Swedish SemEval dataset, where the clustering with JSD performs particularly poorly: using the WD, which takes into account the average embeddings on top of cluster distributions, greatly increases the correlation with the gold standard.",
"On the contrary, on COHA where averaging performs poorly in comparison to clustering, WD is under-performing.",
"The combination of scalable clustering with the interpretation pipeline opens new opportunities for diachronic corpus exploration.",
"In this section, we demonstrate how it could be used to analyze the Aylien Coronavirus News Dataset.",
"The corpus contains about 500k news articles related to COVID-19 from January to April 2020 11 , unevenly distributed over the months (160M words in March, 41M in February, 35M in April and 10M in January).",
"We split the corpus into monthly chunks and apply our scalable word usage change detection method.",
"We extract the top words with the highest average WD between the successive months to conduct a deeper analysis.",
"We exclude words that appear less than 50 times in each month to avoid spurious drifts due to words having too few occurrences in a time slice.",
"However, some drifts due to corpus artefacts remain, in particular dates such as '2019-20' .",
"Thus, words containing numbers and one-letter words are also removed.",
"In Table 2 we present the top 10 most drifting words extracted using k-means with k=5 and ranked according to the average WD across the four months 12 .",
"Among them, the word diamond is related to the cruise ship Diamond Princess, which suffered from an outbreak of COVID-19 and was quarantined for several weeks.",
"The word king , which is the second most changing word, is related to the King county, Washington, where the first confirmed COVID-19 related death in the USA appeared, and to the Netflix show Tiger King, which was released in March.",
"Thus, the primary context for this word changed several times, which is reflected in our results.",
"Other words are mostly constituent words in named entities, related e.g., to an American Society of Hematology (ASH) Research Collaborative's Data Hub, which is capturing data on subjects tested positive for COVID-19.",
"The results suggest that the model does what it is meant to do: for most words in the list it is possible to find an explanation why its usage changed during the beginning of 2020.",
"The list contains many proper names or proper name constituents, which could be either desirable or undesirable property, depending on research goals.",
"Some work focuses specifically on proper names (Hennig and Wilson, 2020), since they could be a good proxy to shifts in socio-political situations.",
"On the other hand, if 12 This is a rather arbitrary procedure: one can imagine that a domain expert would prefer a different frequency threshold or focus more on a given month.",
"The most time-consuming part is embedding extraction.",
"Once this is done, clustering and keyword extraction can be done as many times as necessary.",
"the focus of the study are shifts in more abstract concepts, then proper names could be filtered out before the embedding generation stage by employing named entity recognition tools.",
"The interpretation pipeline, described in Section 4.3, is illustrated in figures 1 and",
"2. We focus on two words, diamond and strain , to show the various phenomena that can be detected.",
"Diamond is the top drifting word in the entire vocabulary (see Table 2); it can be both a common noun and an entity, inducing usage drift when the entity appears in the newspapers after events with high media coverage.",
"Strain is the 38th word with the highest drift overall, and the 15th highest between February and March 2020.",
"It has several different senses whose usage vary across time following the events in the news.",
"We cluster their vector representations from the Aylien corpus using k-means with k = 5 and apply the cluster filtering and merging step.",
"Then, using tf-idf on unigrams and bigrams, we extract a set of keywords for each cluster to interpret the variations of their distribution.",
"The keywords and cluster distributions for the word diamond can be found in Figure",
"1. One of the clusters was removed at the filtering step, as it had less than 10 embeddings inside, and no other cluster was close enough.",
"A clear temporal tendency is visible from the cluster distribution in Figure 1: a new major usage appears in February, corresponding to the event of the quarantined cruise ship (Cluster 0); this association is revealed by the keywords for this cluster.",
"Moreover, the WD between January and February, when the outbreak happened, is 0 .",
"337 ; it is also very high between February and March ( 0 . 342 ).",
"It reflects the large gap between the cluster distributions, first with the appearance of Cluster 0 in February that made the other usages of the word diamond in the media almost disappear, and then the reappearance of other usages in March, when the situation around the cruise ship gradually normalized.",
"Cluster 1, that appears in March, is related to Neil Diamond's coronavirus parody of the song Sweet Caroline\" which was shared mid-March on the social media platforms and received a lot of attention in the US.",
"Cluster 3 is related to the diamond industry; it is much less discussed as soon as the pandemic breaks out in February.",
"Finally, Cluster 2 deals with several topics: Diamond Hill Capital, a US investment company, and the Wanda Diamond League, an international track and field athletic competition which saw most of its meetings postponed because of the pandemic.",
"This last cluster shows the limitations of our clustering: it is complex to identify and differentiate all the usages of a word perfectly.",
"The keywords and cluster distributions for the word strain can be found in Figure",
"2. This is a polysemic word with two main senses in our corpus: as the variant of a virus or bacteria (biological term) and as a severe or excessive demand on the strength, resources, or abilities of someone or something (Oxford dictionary).",
"Clusters 1, 3 and 4, which roughly match the second sense of the word (strain on healthcare systems in cluster 4, financial strain in cluster 3 and strain on resources and infrastructure in cluster 1), grow bigger across time, while clusters 0 and 2, which match the first sense of the word (e.g., new virus strain), shrink.",
"This behavior underlines the evolution of the concerns related to the pandemic in the newspapers.",
"We proposed a scalable and interpretable method for word usage change detection, which outperforms the non-scalable contextual embeddings-based methods by a large margin.",
"The new method also allows completely data-driven analysis of word sense dynamic in large corpora, which was impossible to conduct with unscalable methods.",
"This opens new opportunities in both language change studies and text stream monitoring tasks.",
"In this paper we focused on the latter application by analysing a large corpus of COVID-19 related news.",
"The method is outperformed by the state-of-the-art SGNS+OP+CD method.",
"We hypothesise that this can be connected with the fact that the sentences in all but one evaluation corpus (COHA) are shuffled, meaning that BERT models cannot leverage the usual sequence of 512 tokens as a context, but are limited to the number of tokens in the sentence.",
"We will explore this hypothesis in the future.",
"Despite achieving lower performance than the SGNS+OP+CD method, we nevertheless argue that our method offers a more fine-grained interpretation than methods based on non-contextual embeddings, since it accounts for the fact that words can have multiple meanings.",
"The cluster-based technique returns a degree of change and a set of sentence clusters for each word in the corpus, roughly corresponding to word senses or particular usages.",
"For this reason, the approach can be used for detection of new word usages and for tracing how these usages disappear, as we have shown in Section 6.",
"Even more, word usages and their distributions over time could be linked with real-word events by labeling sentence clusters with a set of cluster-specific keywords.",
"Overall, we observe a large disparity between results on different evaluation corpora.",
"This is in line with the results of the Semeval 2020 task 1 (Schlechtweg et al., 2020), where none of the best-performing methods was able to achieve the best result on all corpora.",
"In practice, different methods focus on different aspects of word usage change: Averaging and SGNS+OP+CD focus on average variation of word usage, hiding the intra-period diversity.",
"When it comes to clustering, JSD-based method detects the appearance or disappearance of a given usage, even a minor one.",
"The WD-based method, using information from both the cluster distribution and the embeddings vectors, represents a compromise between the averaging and the JSD-based methods.",
"In this paper we follow the general approach in semantic shift detection literature and apply our analysis on the raw text.",
"However, our results demonstrate that at least news monitoring applications would benefit from the application of the traditional text processing pipeline, in particular the extraction of named entities and dates.",
"This will be addressed in the future work.",
"This work has been supported by the European Union Horizon 2020 research and innovation programme under grants 770299 (NewsEye) and 825153 (EMBEDDIA), the project Computer-assisted multilingual news discourse analysis with contextual embeddings (CANDAS, J6-2581), and Project Development of Slovene in the Digital Environment (RSDO)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"result",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"other"
] |
[
"This paper introduces semantic frame forecast , a task that predicts the semantic frames that will occur in the next 10, 100, or even 1,000 sentences in a running story.",
"Prior work focused on predicting the immediate future of a story, such as one to a few sentences ahead.",
"However, when novelists write long stories, generating a few sentences is not enough to help them gain high-level insight to develop the follow-up story.",
"In this paper, we formulate a long story as a sequence of story blocks, where each block contains a fixed number of sentences ( e.g., 10, 100, or 200).",
"This formulation allows us to predict the follow-up story arc beyond the scope of a few sentences.",
"We represent a story block using the term frequencies (TF) of semantic frames in it, normalized by each frame's inverse document frequency (IDF).",
"We conduct semantic frame forecast experiments on 4,794 books from the Bookcorpus and 7,962 scientific abstracts from CODA-19, with block sizes ranging from 5 to 1,000 sentences.",
"The results show that automated models can forecast the follow-up story blocks better than the random, prior, and replay baselines, indicating the task's feasibility.",
"We also learn that the models using the frame representation as features outperform all the existing approaches when the block size is over 150 sentences.",
"The human evaluation also shows that the proposed frame representation, when visualized as word clouds, is comprehensible, representative, and specific to humans.",
"Our code is available at: https://github.com/ appleternity/FrameForecasting .",
"Writing a good novel is hard.",
"Creative writers can get stuck in the middle of their drafts and struggle to develop follow-up scenes.",
"Writing support systems, such as Heteroglossia (Huang et al., 2020a), generate paragraphs or ideas to help writers figure out the next part of the ongoing story.",
"However, Figure 1: The semantic frame forecast is a task that predicts the semantic frames that will occur in the next part of a story based on the texts written so far.",
"little literature focuses on plot prediction for long stories .",
"Much prior work focused on predicting the immediate future of a story, i.e., one to a few sentences later.",
"For example, the Creative Help system used a recurrent neural network model to generate the next sentence to support writing (Roemmele and Gordon, 2015); the Scheherazade system uses crowdsourcing and artificial intelligence techniques to interactively construct the narrative sentence by sentence (Li and Riedl, 2015); Clark et al. (2018) study machine-in-the-loop story writing where the machine constantly generates a suggestion for the next sentence to stimulate writers; and Metapho-ria (Gero and Chilton, 2019) generates metaphors, an even smaller unit, to inspire writers based on an input word by searching relations and ranking distances on ConceptNet (Liu and Singh, 2004).",
"Generating a coherent story across multiple sentences is challenging, even with cutting-edge pretrained models (See et al., 2019).",
"To generate coherent stories, researchers often first generate a high-level representation of the story plots and then use it as a guide to generate a full story.",
"For example, Martin et al. (2018) propose an event representation that uses an SVO tuple to generate story plots; Plan-and-write (Yao et al., 2019) uses the RAKE algorithm (Rose et al., 2010) to extract the keyword in each sentence to form a storyline and treat it as an intermediate representation; Fan et al. (2019) use predicate-argument pairs annotated by semantic role labelers to model the structure of stories; and Zhang et al. (2020) take words with a certain part-of-speech tag as anchors and show that using anchors as the intermediate representation can improve the story quality.",
"However, these projects all focused on short stories: The event representation is developed on a Wikipedia movie plot summary dataset (Bamman et al., 2013), where a summary has an average of 14.52 sentences; Plan-and-write uses the ROCStories dataset (Mostafazadeh et al., 2016), where each story has only 5 sentences; Fan et al. test their algorithm on the Writing-Prompts dataset (Fan et al., 2018), where stories have 734 words (around 42 sentences) on average; and Zhange et al.",
"'s anchor representation is developed on the VIST dataset (Huang et al., 2016), where a story has 5 sentences.",
"All the existing intermediate representations are generated on a sentence basis, meaning that the length of the representations increases along with the story length.",
"That is, when applying these representations to novels that usually have more than 50,000 words (as defined by the National Novel Writing Month (wik, 2020)), it is not likely that such representations can still work.",
"We thus introduce a new Frame Representation that compiles semantic frames into a fixed-length TF-IDF vector and a Semantic Frame Forecast task that aims to predict the next frame representation using the information in the current story block (see Figure 1).",
"Two different datasets are built to examine the effectiveness of the proposed frame representation: one from Bookcorpus (Zhu et al., 2015), a fiction dataset; and one from CODA-19 (Huang et al., 2020b), a scientific abstract dataset.",
"We establish several baselines and test them on different story block sizes, up to 1,000 sentences.",
"The result shows that the proposed frame representation successfully captures the story plot information and helps the semantic frame forecast task, especially for story blocks with more than 150 sentences.",
"To enable humans to perceive and comprehend frame representations, we further propose a process that visualizes a vector-based frame representation as word clouds.",
"Human evaluations show that word clouds represent a story block with reasonable specificity, and our proposed model produces word clouds that are more representative than that of BERT.",
"Automated Story Generation.",
"Classic story generation focuses on generating logically coherent stories, plot planning (Riedl and Young, 2010; Li et al., 2013), and case-based reasoning (Gervs et al., 2004).",
"Recently, several neural story generation models have been proposed (Peng et al., 2018; Fan et al., 2018), even including massive pretrained models (Radford et al., 2019; Keskar et al., 2019).",
"However, researchers realize that word-by-word generation models cannot efficiently model the long dependency across sentences (See et al., 2019).",
"Models using intermediate representations as guidance to generate stories are then proposed (Yao et al., 2019; Martin et al., 2018; Ammanabrolu et al., 2020; Fan et al., 2019; Zhang et al., 2020).",
"These works are developed toward short stories and thus are insufficient for modeling novels (See Section 1).",
"Automated Story Understanding.",
"Story understanding is a longstanding goal of AI (Roemmele and Gordon, 2018).",
"Several tests were proposed to evaluate AI models' ability to reason the event sequence in a story.",
"Roemmele et al. (2011) proposed the Choice of Plausible Alternatives (COPA) task, focusing on commonsense knowledge related to identifying causal relations between sequences.",
"Mostafazadeh et al. (2016) proposed the Story Cloze Test, in which the model is required to select which of two given sentences best completes a particular story.",
"Ippolito et al. (2019) proposed the Story Infilling task, which aims to generate the middle span of a story that is coherent with the foregoing context and will reasonably lead to the subsequent plots.",
"Under the broader umbrella of story understanding, some prior work aimed to predict the next event in a story (Granroth-Wilding and Clark, 2016) or to identify the right follow-up line in dialogues (Lowe et al., 2016).",
"As shown in Figure 1, we formulate a long story as a sequence of fixed-length story blocks.",
"Each story block (Figure 2 (1)) has a set of semantic frames (Figure 2 (2)) (Baker et al., 1998).",
"We convert a story block into the Frame Representation (Fig-ure 2 (3)), a TF-IDF vector over semantic frames, by computing the term frequency in that story block and the inverse document frequency over all the story blocks in the corpus.",
"FrameNet (Baker et al., Figure 2: The steps to generate the frame representation for story blocks. The human-readable word clouds are generated to illustrate the conceptual meaning of the frame representation. 1998) defined a total of 1,221 different semantic frames, so the generated TF-IDF has 1,221 dimensions.",
"The Semantic Frame Forecast is then defined as a task to predict the frame representation of the n+1 -th story block using the foregoing content, namely the n -th story block.",
"Evaluation Metric.",
"We use Cosine Similarity between the predicted vector and the gold-standard vector (complied from the human-written story block) for evaluation.",
"Many other metrics, such as Mean-Squared Error (MSE), also exist to measure the distance between two vectors.",
"We build the dataset from the existing Bookcorpus dataset (Zhu et al., 2015) and CODA-19 dataset (Huang et al., 2020b).",
"This section describes how we preprocess the data, remove undesired content, and build the final dataset.",
"Bookcorpus Dataset.",
"We obtain a total of 15 , 605 raw books and their corresponding meta data.",
"To get high-quality fictional content, we remove books using the following heuristic rules:",
"(i) short books whose size is less than 10KB;",
"(ii) books that contain HTML code;",
"(iii) books that are in the epub format (an e-book file for-mat);",
"(iv) books that are not in English;",
"(v) books that are in the Non-Fiction genre;",
"(vi) books that are in the Anthologies genre;",
"(vii) books that are in the Graphic Novels & Comics genre.",
"Since most books contain book information, author information, and some nonfictional content at the beginning and end of the book, we use regular expressions to match the term Chapter to locate the chapter title.",
"Only the contents between the first chapter title and the last chapter title are kept.",
"The last chapter is also removed as there are no certain boundaries to identify the story ending.",
"Books whose chapter titles are un-locatable are also removed.",
"After removing all the unqualified books, a total of 4 , 794 books were used in our dataset.",
"We transliterate all non-ASCII characters into ASCII characters using Unide-code (https://pypi.org/project/Unidecode/) to fulfill the requirement of Open-SESAME (Swayamdipta et al., 2017).",
"Open-SESAME is then used to parse the semantic frames for each sentence.",
"The books are split into training/validation/test sets following a 70/10/20 split, resulting in 3 , 357 , 479 , and 958 books, respectively.",
"To measure the effect of frame representation for different context lengths, we vary the story block length, using 5 , 10 , 20 , 50 , 100 , 150 , 200 , 300 , 500 , and 1 , 000 sentences.",
"When creating instances, we first split a book into story blocks with the specified length and extract all the consecutive two story blocks as instances when context window size (see Figure 1) is set to 1.",
"The IDF of the semantic frame is then computed over the story blocks using all the training sets.",
"Combining with the TF value in each story block, we convert story blocks into frame representations.",
"We use scikit-learn's implementation (Pedregosa et al., 2011) of TF-IDF but with a slight modification on IDF: Scikit-learn uses idf ( t ) = log ( n df ( t )+1 ) to compute a smoothing IDF, but we use idf ( t ) = log ( n df ( t ) ) .",
"The detailed statistic information is shown in Table 1.",
"CODA-19 Dataset.",
"We envision a broader definition of creativity in writing and attempt to apply story arc prediction technologies to the do-Block Size 5 10 20 50 100 150 200 300 500 1000 # Words Mean 71.7 143.5 286.9 717.2 1433.9 2149.8 2865.3 4293.7 7142.5 14212.3 # Frames Mean 17.5 35.0 69.9 174.5 348.6 522.1 695.4 1040.7 1727.3 3417.2 # Events Mean 10.0 20.0 39.9 99.8 199.4 298.9 398.2 596.4 991.2 1967.1 # Train 3,744,948 1,869,947 932,464 369,941 182,479 119,967 88,720 57,455 32,469 13,749 # Valid 574,840 287,054 143,166 56,838 28,073 18,466 13,672 8,881 5,035 2,166 # Test 1,054,816 526,687 262,625 104,198 51,396 33,776 24,987 16,178 9,138 3,861 Table 1: The statistic of Bookcorpus dataset in ten different story block lengths.",
"mains outside novels, for example, scholarly articles.",
"As an earlier exploration, we choose to use a smaller set of human-annotated abstracts (CODA-19 (Huang et al., 2020b)) rather than machine-extracted full text (CORD-19 (Wang et al., 2020a)) in our proof-of-concept study, avoiding formatting issues ( e.g. , reference format, parsing errors) and intensive data cleaning effort.",
"The original CODA-19 dataset contains 10 , 966 human-annotated English abstracts for five different aspects: Background, Purpose, Method, Finding/Contribution, and Other.",
"We remove sentences that are annotated as Other, an aspect for sentences that are not directly related to the content ( e.g., terminology definitions or copyright notices.) Abstracts that contain Uni-code characters are also removed.",
"A total of 7 , 962 abstracts are used in our dataset.",
"We then use Open-SESAME to parse the semantic frames for each sentence.",
"We adopt CODA-19's original split, where the training set, validation set, and testing set have 6 , 509 , 737 , and 716 abstracts, respectively.",
"Three different lengths of story block are used: 1 , 3 , and 5 .",
"We then create instances and compute TF-IDF as described above.",
"Table 2 shows the details.",
"We implement two naive baselines, an information retrieval baseline, two machine learning baselines, two deep learning baselines, an existing model and",
"Replay Model.",
"For each instance, the replay model takes the frame representation in the n -th story block as the prediction, i.e. , the same frames will occur again.",
"Prior Model.",
"The prior model computes the mean of the frame representation over the training set and uses it as the prediction for all the testing instances.",
"Information Retrieval with Frame Representation.",
"For each instance, the information retrieval model searches for the most similar story block in the training set and takes the frame representation from its next story block as the prediction.",
"In this setting, we adopt the cosine similarity on frame representations to measure the story similarity.",
"For block size 5 in the Bookcorpus dataset, there are around 3.7 million instances in the training set, which is infeasible to finish.",
"Random Forest with Frame Representation.",
"The foregoing story block's frame representation is used as the feature for prediction.",
"We use scikit-learn's implementation of Random Forest Regressor (Pedregosa et al., 2011) with a max depth of 3 and 20 estimators.",
"For block sizes that have more than one million training instances (5 and 10 in the Bookcorpus dataset), we randomly sample one million instances to train the model.",
"LGBM with Frame Representation.",
"This is the same as the previous setup but trained using the LGBM Regressor model (Ke et al., 2017) with the max depth 5, the number of leaves 5, and the number of estimators 100.",
"For block sizes that have more than one million training instances (5 and 10 in the Bookcorpus dataset), we randomly sample one million instances to train the model.",
"Denoising Autoencoder architecture (Bengio et al., 2013).",
"We feed in the foregoing story block's frame representation and output the frame representation for the follow-up story block.",
"Thirty percent of the input is dropped randomly.",
"The model is optimized using the cosine distance ( 1 cosine similarity ).",
"Both the encoder and decoder are created via five dense layers with a hidden size of 512.",
"We use a learning rate of 1e-5 and a batch size of 512 and train the model with the early stopping criteria of no improvement for 20 epochs.",
"The best model on the validation set is kept for testing.",
"Event Representation Model (Event-Rep).",
"We use Martin et al.",
"'s event representation (2018) on the foregoing story block as the feature.",
"An event tuple is defined as (cid:104) s, v, o, m (cid:105) , where s is the subject, v is the verb, o is the object, and m is the verb modifier.",
"We extract the dependency relation using the Stanza parser (Qi et al., 2020).",
"Unlike Martin el al.",
"'s implementation, where the empty placeholder only replaces unidentified objects and modifiers, we find that the subjects can also be frequently missing in fiction books.",
"For example, in Come out?",
"Zack asked.",
"Come out of where? .",
"In both cases here, the verb come does not have a subject.",
"In Fine, follow me. , follow has an object but does not have a subject.",
"Therefore, we allow s to have a placeholder in our implementation.",
"All words are stemmed by NLTK (Loper and Bird, 2002).",
"After extracting the event representation, the sequence of event tuples in the foregoing story block is fed into a five-layer LSTM model (Hochreiter and Schmidhuber, 1997) to predict its follow-up frame representation.",
"Note that the length of the event tuple sequence changes along with the block size.",
"We thus set the maximum length of the sequence to the 95th percentile of the length in the training set.",
"Sequences longer than the maximum length are left-truncated.",
"The model is trained with a hidden size of 512, a learning rate of 3e-5, a dropout rate of 0.05, and a batch size of 64.",
"We optimize the model using the cosine distance and apply the early stopping criteria of no improvement for three epochs.",
"The best model on the validation set is kept for testing.",
"BERT.",
"We take the pure text in the foregoing story block as the feature and apply the pretrained BERT model (Devlin et al., 2019).",
"BERT has a token length limitation, so we set the maximum length of tokens to 500 for Bookcorpus and 300 for CODA-19.",
"Sentences with more than 500 tokens are truncated from the left.",
"We take the [CLS] token representation from the last layer and add a dense layer on top of it to predict the follow-up frame representation.",
"The model is trained with a learning rate of 1e-5 and a batch size of 32.",
"We optimize the model using the cosine distance and apply the early stopping when no improvement for five epochs.",
"The model with the best score on the validation set is kept for testing.",
"SciBERT (For CODA-19 Only).",
"This is the same as the previous setting but is trained using the pretrained SciBERT model (Beltagy et al., 2019).",
"We only test this approach on the CODA-19 dataset since it is from the scientific domain.",
"GPT-2 (For Bookcorpus Only).",
"We also include a text generation model, GPT-2 (gpt2-xl) (Radford et al., 2019) with block sizes of 5, 10, 20, and 50.",
"Since GPT-2 is computationally expensive, we conduct the experiment on a subset of the dataset, where 1,000 instances are randomly selected.",
"We feed the text in the latest story block (n) into GPT-2 and generate 70, 150, 300, and 700 words for block sizes 5, 10, 20, and 50, respectively (5 sentences 70 words; 10 sentences 150 words in Bookcourpus, etc).",
"For stories that exceed the GPT-2's word limit, we truncate the text from the left.",
"Stories with block size larger than 100 would have more than 1400 words which by itself exceed the GPT-2's word limit.",
"Generated stories are then parsed by Open-SESAME to extract the semantic frames and turned into frame representations as the predictions.",
"Table 3 and Table 4 show the experimental results.",
"In this section, we summarize the main findings.",
"Predicting forthcoming semantic frames is remarkably challenging yet possible.",
"Machine-learning models outperform the two naive baselines for different story lengths.",
"In the Bookcorpus dataset, BERT performs the best for story blocks under 100 sentences, while LGBM performs the best for story blocks over 150 sentences.",
"In the CODA-19 dataset, SciBERT performs the best for block sizes of 1 and 3, while DAE performs the best for a block size of 5.",
"While the task is very challenging, these results shed light on the semantic frame forecast task.",
"However, the improvement Feature Model Block Size 5 10 20 50 100 150 200 300 500 1000 Replay Baseline .0654 .0915 .1237 .1737 .2163 .2448 .2665 .3000 .3462 .4155 Prior Baseline .2029 .2435 .2857 .3389 .3754 .3962 .4105 .4302 .4528 .4776 Frame IR Baseline -.0631 .0851 .1290 .1841 .2085 .2262 .2536 .2859 .3321 Frame Random Forest .2037 .2448 .2881 .3427 .3807 .4025 .4184 .4402 .4659 .4966 Frame LGBM .2072 .2506 .2967 .3564 .3995 .4255 .4441 .4711 .5048 .5510 Frame DAE .2082 .2515 .2966 .3547 .3976 .4223 .4400 .4598 .4898 .5280 Event Event-Rep .2111 .2541 .2994 .3532 .3929 .4126 .4280 .4453 .4626 .4792 Text BERT .2172 .2611 .3073 .3637 .4012 .4229 .4371 .4559 .4779 .5057 Text GPT-2 .0519 .0739 .0990 .1402 ---DELTA .0142 .0176 .0216 .0249 .0257 .0293 .0336 .0409 .0520 .0734 Table 3: Baseline result for Bookcorpus dataset.",
"is not big, as shown in the DELTA row, suggesting that semantic frame forecast requires more investigation and understanding.",
"Prior is a robust and strong baseline.",
"In both the Bookcorpus dataset and the CODA-19 dataset, the prior baseline is strong.",
"As the story gets longer, the performance also increases.",
"This suggests that when the story block gets bigger, more and more frames will constantly occur.",
"Replay baseline shows the relation of consecutive story blocks.",
"The replay baseline assumes that the events that happen now will likely happen again shortly.",
"The results in Table 3 and Table 4 partially confirm this assumption.",
"To understand more about the assumption, we use the replay baseline to predict the n+i -th story block from the n -th story block in the Bookcorpus dataset.",
"Figure 3 Figure 3: Using the replay baseline to predict the n+i th story block from the n -th story block (story block size = 5, 10, , 1000.) Things that happen in the current story block are more likely to happen again shortly.",
"shows the results.",
"We can see that things that happen now will be more likely to happen in the near future compared to story blocks farther from the current one.",
"Event-Rep works better in short stories.",
"In the Bookcorpus dataset, event representation works better than the frame representation in small block sizes (5, 10, and 20).",
"However, starting from a block size of 50, the model cannot perform as well as the other models.",
"We thus conclude that event representation works better in short stories.",
"The main reason is that event representations are generated on a sentence-by-sentence basis and will create overwhelming information on long stories.",
"The existing intermediate representations (see Section 1) are mostly generated from sentences and will likely have the same issue as the event representation.",
"Compared to the existing works, the proposed frame representation encodes a story block, no matter how long it is, into a fixed-length vector and therefore performs better on longer stories.",
"BERT performs very well in short stories.",
"The results of BERT and SciBERT in Table 3 and Table 4 show that textual information is helpful in predicting story blocks.",
"BERT performs better when the block size is under 100 in the Bookcorpus dataset and below 3 in CODA-19.",
"However, handling long texts remain challenging for BERT, as its computational complexity scales with the square of the token length.",
"Researchers started reducing the computation complexity for transformer-based models to allow modeling on long texts such as Linformer (Wang et al., 2020b), Longformer (Belt-agy et al., 2020), Reformer (Kitaev et al., 2020), and BigBird (Zaheer et al., 2020).",
"However, these models still require a lot of computation power and are not yet ready for general use.",
"The good performance does not merely come from the number of instances.",
"Deep learning methods often require more instances for training.",
"To show that the result in Table 3 is not mainly caused by the number of instances, we conduct the same experiment in Bookcorpus dataset using 88 , 720 training instances for block sizes ranging from 5 to 200.",
"Table 5 shows the results.",
"The performance is affected, but the conclusions we make above still stand, showing that the number of instances is not the main factor for our observations.",
"Meanwhile, we find that BERT is affected more than LGBM.",
"In Table 5 the performance of BERT drops by 0 .",
"0092 to 0 .",
"0051 compared to Table 3, but LGBM only drops 0 .",
"0039 to 0 .",
"0007 .",
"Although this suggests that the number of instances can cause the difference, it also shows that the frame representation can be used with fewer instances.",
"GPT-2 is not effective.",
"GPT-2 is not effective in predicting the story flow even though it can generate reasonable sentences.",
"Even the naive Replay window Feature Model Block Size 20 50 100 2 Frame LGBM .2989 .3590 .4029 Text BERT .3081 .3625 .4002 5 Frame LGBM .2989 .3617 .4065 Text BERT .3082 .3618 .3985 Table 6: Results of using 2 or 5 foregoing story blocks to predict the n+1 -th story block.",
"baseline outperforms the GPT-2 baseline in predicting the story block.",
"We hypothesize that GPT-2 is not good at maintaining the coherence among sentences or events, especially in the creative writing domain.",
"Similar phenomenons are also observed by others and used to motivate the need for guided generation models or progressive generation models (Wang et al., 2020c; Tan et al., 2020).",
"This paper focuses on using 1 story block to forecast the next one, i.e., window size = 1 (see Figure 1.) As a proof of concept, we use 2 and 5 blocks (window size = 2 and 5) for prediction, respectively.",
"We use two models: LGBM with frame representation, and BERT with text.",
"For LGBM, we simply concatenate the frame representation from the input story blocks to create the input vector.",
"For BERT, we put the event tuple and the text together as the input.",
"Table 6 shows the results.",
"While BERT does not benefit from using more contexts, LGBM's performance improves, suggesting the potentials of using a larger context window.",
"More research is required to understand the effects.",
"Different frames may contribute differently to the prediction of the follow-up story.",
"To understand which frame plays a more important role in the story, we conduct an ablation study by investigating the LGBM model on block 150.",
"We obliterate one frame from the input frame representation and record the performance change, where a higher performance deduction means the frame removed is more important.",
"A total of 50 frames are selected randomly for the ablation study.",
"Table 7 shows the top and bottom five frames.",
"We hypothesize that the more generic frames, such as State_continue and Proper_reference, might be less important to the follow-up stories, but it will require more research to understand the impacts fully.",
"We further evaluate the proposed method with humans.",
"We first visualize the vector of semantic frames into word clouds so that humans can perceive and comprehend it.",
"We then use online crowd workers to test the",
"(i) representativeness and",
"(ii) the specificity of the produced word clouds.",
"Visualizing Semantic Frame Vectors into Word Clouds.",
"Figure 4 shows the workflow of generating word clouds based on a frame representation ( i.e., a TF-IDF vector).",
"In FrameNet, lexi-cal units are the terms that can trigger a specific frame.",
"Compared to showing the name and defini-tion of a frame, lexical units are easier for people to read and comprehend.",
"Therefore, we use the top 30 frames (ranked by their TF-IDF weights) and randomly select up to three lexical units for each frame to form a word cloud.",
"The size and color of the lexical unit is computed according to the frame's TF-IDF weight, where a higher TF-IDF value will result in a larger font and darker color.",
"Finally, we arrange the lexical units into three word clouds on nouns, verbs, and adjectives using their POS tags.",
"All the word clouds are generated using d3-cloud (Davies, 2016).",
"Task Setups.",
"In this Human Intelligence Task (HIT), we show a story block ( n + 1 ) and two or three [noun, verb, adjective] word clouds ( n + 1 ) produced by different models based on the previous story block ( n ).",
"The goal is to measure, from the users' perspective, how much the generated word clouds represent the actual human-written followup stories.",
"We display the actual next story block ( n + 1 ) and the word clouds produced by different models based on the latest story block ( n ).",
"The workers from Amazon Mechanical Turk (MTurk) are asked to read the story and select the word cloud that better represents the story block.",
"In the worker interface, we set up a 3-minutes lock for submission and a reach-to-the-bottom lock for the story panel to make sure the workers read the story.",
"Nine different workers are recruited for each task 1 .",
"We empirically estimate the working time to be less than 6 minutes per HIT and set the price to $0.99/HIT (hourly wage = $10).",
"We choose block size 150 to compare two models: LGBM with frame representation and BERT with text.",
"Ground-truth word clouds are also added to some of the HITs to check the validity of the task.",
"A total of 150 instances are randomly selected from Boocorpus testing set.",
"For each instance, the foregoing story block is feed into LGBM and BERT to predict the frame representation of the follow-up story block.",
"Out of 150 instances, 50 instances are conducted with ground truth, where a total of three word clouds are shown.",
"Another 100 instances are used for comparing LGBM against BERT directly.",
"Results.",
"Over the 50 HITs where ground truth is included, (ground truth, LGBM, BERT) wins (32, 15, 16) HITs, respectively (ties exist.) Nine assignments are recruited from 9 workers for each HIT.",
"Regarding to the assignment voting, (ground truth, LGBM, BERT) gets (199, 131, 120) votes, respectively.",
"The result suggests that humans can correctly perceive the word clouds' conceptual meaning as the ground truth is rated the best.",
"Over the 100 HITs where LGBM and BERT are compared directly, (LGBM, BERT) wins (59, 41) HITs.",
"Regarding the assignment voting, (LGBM, BERT) gets (472, 428) votes, respectively.",
"The result shows that LGBM is better than BERT in a block size of 150, which aligns with our automatic evaluation results using cosine similarity (see Section 6.) 1 Four built-in worker qualifications are used: HIT Approval Rate ( 98%), Number of Approved HITs ( 3000 ), Locale (US Only), and Adult Content Qualification.",
"This task evaluates whether using the proposed word cloud to represent a story block is specific enough for humans to distinguish the correct story from the distractor.",
"Task Setups.",
"In this HIT, we show two story blocks ( n ) and one set of [noun, verb, adjective] word clouds ( n ).",
"Note that the current story block ( n ) and its ground-truth word cloud ( n ) are used to examine if humans can correctly perceive the semantic information from word cloud visualization.",
"One story block is the answer that is referred to by the word clouds and the other one is a distrac-tor.",
"Workers are asked to read the two story blocks and select the story block that is referred to by the word clouds.",
"Nine different workers are recruited for each HIT.",
"We use the same worker interface design and built-in worker qualifications as that of Section 7.1.",
"A HIT takes estimatedly 2.33 minutes and is priced at $0.38.",
"We choose block size 20 and use the ground-truth word clouds for this experiment.",
"Fifty instances from 50 different books are randomly selected from Bookcorpus testing set.",
"We also randomly select a 20-sentences story block from a different book as the distractor.",
"Results.",
"Of the 450 assignments, 63.8% of the answers were correct.",
"When aggregating the assignments using majority voting, 74% of 50 HITs were answered correctly.",
"We thus believe that it is reasonably specific for humans to represent a story block using the proposed word clouds.",
"This paper proposes a semantic frame forecast task that aims to forecast the semantic frames in the next 10, 100, or even 1,000 sentences of a story.",
"A long story is formulated as a sequence of story blocks that contain a fixed number of sentences.",
"We further introduce a frame representation that can encode a story block into a fixed-length TF-IDF vector over semantic frames.",
"Experiments on both the Bookcorpus dataset and CODA-19 dataset show that the proposed frame representation helps semantic frame forecast in large story blocks.",
"By visualizing the frame representation as word clouds, we also show that it is comprehensible, representative, and specific to humans.",
"In the future, we will introduce the frame representation into story generation models to ensure coherence when generating long stories.",
"We will also explore the possibility of supporting writers to develop the next part of their stories by generating semantic frames as clues using semantic frame forecast.",
"We would like to thanks the Huck Institutes of the Life Sciences' Coronavirus Research Seed Fund (CRSF) and the College of IST COVID-19 Seed Fund at Penn State University who support the construction of CODA-19.",
"We also thank Tiffany Knearem for the feedback for designing word cloud visualization and workers who participated the human evaluation study."
] | [
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"result",
"method",
"objective",
"other",
"other"
] |
[
"A stereotype is an over-generalized belief about a particular group of people, e.g., Asians are good at math or African Americans are athletic .",
"Such beliefs (biases) are known to hurt target groups.",
"Since pretrained language models are trained on large real world data, they are known to capture stereotypical biases.",
"It is important to quantify to what extent these biases are present in them.",
"Although this is a rapidly growing area of research, existing literature lacks in two important aspects: 1) they mainly evaluate bias of pretrained language models on a small set of artificial sentences, even though these models are trained on natural data; 2) current evaluations focus on measuring bias without considering the language modeling ability of a model, which could lead to misleading trust on a model even if it is a poor language model.",
"We address both these problems.",
"We present StereoSet, a large-scale natural English dataset to measure stereotypical biases in four domains: gender, profession, race, and religion.",
"We contrast both stereotypical bias and language modeling ability of popular models like BERT, GPT2, ROBERTA , and XLNET .",
"We show that these models exhibit strong stereotypical biases.",
"Our data and code are available at https://stereoset.",
"mit.edu .",
"A key idea behind the current success of neural network models for language is pretrained representations such as word embeddings (Mikolov et al., 2013; Pennington et al., 2014) and pretrained language models (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019).",
"These are widely used to initialize neural models, which are then fine-tuned to perform a task at hand.",
"Typically, these are learned from massive text cor-Choose the appropriate word: Domain: Gender Target: Girl Context: Girls tend to be more ____ than boys Option 1: soft (stereotype) Option 2: determined (anti-stereotype) Option 3: fish (meaningless)",
"(a) The Intrasentence Context Association Test Choose the appropriate sentence: Domain: Race Target: Arab Context: He is an Arab from the Middle East.",
"Option 1: He is probably a terrorist with bombs.",
"(stereotype)",
"Option 2: He is a pacifist.",
"(anti-stereotype)",
"Option 3: My dog wants a walk.",
"(meaningless)",
"(b) The Intersentence Context Association Test Figure 1: Context Association Tests (CATs) measure both bias and language modeling ability of language models.",
"pora using variants of language modeling objective (i.e., predicting a word given its surrounding context).",
"In the recent years, these representations empowered neural models to attain unprecedented levels of performance gains on multiple language tasks.",
"The resulting models are being deployed widely as services on platforms like Google Cloud and Amazon AWS to serve millions of users.",
"While this growth is commendable, there are concerns about the fairness of these models.",
"Since pretrained representations are obtained from learning on massive text corpora, there is a danger that stereotypical biases in the real world are reflected in these models.",
"For example, GPT2 (Radford et al., 2019), a pretrained language model, has shown to generate unpleasant stereotypical text when prompted with context containing certain races such as African-Americans (Sheng et al., 2019).",
"In this work, we assess the stereotypical biases of popular pretrained language models.",
"The seminal works of Bolukbasi et al. (2016) and Caliskan et al. (2017) show that word embeddings such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) contain stereotypical biases using diagnostic methods like word analogies and association tests.",
"For example, Caliskan et al. show that male names are more likely to be associated with career terms than female names where the association is measured using embedding similarity.",
"Recently, studies have attempted to evaluate bias in contextual word embeddings where a word is provided with artificial context (May et al., 2019; Kurita et al., 2019), e.g., the contextual embedding of man is obtained from the embedding of man in the sentence This is a man .",
"However, these have limitations.",
"First, the context does not reflect the natural usage of a word.",
"Second, they require stereotypical attribute terms to be predefined (e.g., pleasant and unpleasant terms).",
"Third, they focus on single word terms and ignore multiword terms like construction worker .",
"Lastly, they study bias of a model independent of its language modeling ability which could lead to undeserved trust in a model if it is a poor language model.",
"In this work, we propose methods to evaluate stereotypical bias of pretrained language models.",
"These methods do not have the aforementioned limitations.",
"Specifically, we design two different association tests, one for measuring bias at sentence level ( intrasentence ), and the other at discourse level ( intersentence ) as shown in Figure 1..",
"In these tests, each target term (e.g., Arab) is provided with a natural context in which it appears, along with three possible associative contexts.",
"The associative contexts help us to evaluate the biases of the model, as well as measure its language modeling performance.",
"We crowdsource StereoSet , a dataset for associative contexts in English containing 4 target domains, 321 target terms and 16,995 test instances (triplets).",
"Following previous literature (Greenwald and Banaji, 1995; Bolukbasi et al., 2016; Caliskan et al., 2017), we define a stereotype as an overgeneralized belief about a particular group of people, e.g., Asians are good at math .",
"Our primary focus is on detecting the presence of stereotypes in pretrained language models.",
"We leave the details of mitigating bias from pretrained language models to future work.",
"We design our formulation around the desiderata of an ideal language model.",
"An ideal language model should be able to perform the task of language modeling, i.e., it should rank meaningful contexts higher than meaningless contexts.",
"For example, it should tell us that Our housekeeper is a Mexican is more probable than Our housekeeper is a banana .",
"Second, it should not exhibit stereotypical bias, i.e., it should avoid ranking stereotypical contexts higher than anti-stereotypical contexts, e.g., Our housekeeper is a Mexican and Our housekeeper is an American should be equally possible.",
"We desire equally possible instead of anti-stereotype over stereotype because any kind of overgeneralized belief is known to hurt target groups (Czopp et al., 2015).",
"If the model consistently prefers stereotypes over anti-stereotypes, we say that the model exhibits stereotypical bias.",
"Another approach would be to rank a neutral context higher over stereotypical or anti-stereotypical context.",
"In practice, we found that collecting neutral contexts are prone to implicit biases and has low inter-annotator agreement (Section 4).",
"Based on these observations, we develop the Context Association Test (CAT), a test that measures the language modeling ability as well as the stereotypical bias of pretrained language models.",
"Although language modeling has standard evaluation metrics such as perplexity, due to varying vocabulary sizes of different pretrained models, this metric becomes incomparable across models.",
"In order to analyse the relationship between language modeling ability and stereotypical bias, we define a simple metric that is appropriate for our task.",
"Evaluating the full language modeling ability of models is beyond the scope of this work.",
"In CAT, given a context containing a target group (e.g., housekeeper), we provide three different ways to instantiate this context.",
"Each instantiation corresponds to either a stereotypical, anti-stereotypical, or a meaningless association.",
"The stereotypical and anti-stereotypical associations are used to measure stereotypical bias, and the meaningless association is used to ensure that an unbiased language model still retains language modeling ability.",
"We include the meaningless association in order to provide a standardized benchmark across both masked and autoregressive language models, which cannot be done with common metrics such as perplexity.",
"Specifically, we design two types of association tests, intrasentence and intersentence CATs , to assess language modeling and stereotypical bias at sentence level and discourse level.",
"Figure 1 shows an example for each.",
"Our intrasentence task measures the bias and the language modeling ability at sentence-level.",
"We create a fill-in-the-blank style context sentence describing the target group, and a set of three attributes, which correspond to a stereotype, an anti-stereotype, and a meaningless option (Figure 1a).",
"In order to measure language modeling and stereotypical bias, we determine which attribute has the greatest likelihood of filling the blank, i.e., which of the instantiated contexts is more likely.",
"Our intersentence task measures the bias and the language modeling ability at the discourse-level.",
"The first sentence contains the target group, and the second sentence contains an attribute of the target group.",
"Figure 1b shows the intersentence task.",
"We create a context sentence with a target group that can be succeeded with three attribute sentences corresponding to a stereotype, an anti-stereotype and a meaningless option.",
"We measure the bias and language modeling ability based on which attribute sentence is likely to follow the context sentence.",
"Our work is inspired from related attempts that aim to measure bias in pretrained representations such as word embeddings and language models.",
"The two popular methods of testing bias in word embeddings are word analogy tests and word association tests.",
"In word analogy tests, given two words in a certain syntactic or semantic relation ( man king ), the goal is generate a word that is in similar relation to a given word ( woman queen ).",
"Mikolov et al. (2013) showed that word embeddings capture syntactic and semantic word analogies, e.g., gender, morphology etc.",
"Bolukbasi et al. (2016) build on this observation to study gender bias.",
"They show that word embeddings capture several undesired gender biases (seman-tic relations) e.g. doctor : man :: woman : nurse .",
"Manzini et al. (2019) extend this to show that word embeddings capture several stereotypical biases such as racial and religious biases.",
"In the word embedding association test (WEAT, Caliskan et al. 2017), the association of two complementary classes of words, e.g., European and African names, with two other complementary classes of attributes that indicate bias, e.g., pleasant and unpleasant attributes, are studied to quantify the bias.",
"The bias is defined as the difference in the degree with which European names are associated with pleasant and unpleasant attributes in comparison with African names being associated with those attributes.",
"Here, the association is defined as the similarity between the name and attribute word embeddings.",
"This is the first large scale study that showed word embeddings exhibit several stereotypical biases and not just gender bias.",
"Our inspiration for CAT comes from WEAT.",
"May et al. (2019) extend WEAT to sentence encoders, calling it the Sentence Encoder Association Test (SEAT).",
"For a target term and its attribute, they create artificial sentences using generic context of the form \"This is [target].\" and \"They are [attribute].\" and obtain contextual word embeddings of the target and the attribute terms.",
"They repeat Caliskan et al. (2017)'s study using these embeddings and cosine similarity as the association metric but their study was inconclusive.",
"Later, Kurita et al. (2019) show that cosine similarity is not the best association metric and define a new association metric based on the probability of predicting an attribute given the target in generic sentential context, e.g., [target] is [mask] , where [mask] is the attribute.",
"They show that similar observations of Caliskan et al. (2017) are observed on contextual word embeddings too.",
"Our intrasentence CAT is similar to their setting but with natural context.",
"We also go beyond intrasentence to propose intersentence CATs, since language modeling is not limited at sentence level.",
"Concurrent to our work, Nangia et al. (2020) introduced CrowS-Pairs, which examines stereotypical bias via minimal pairs.",
"However, CrowS-Pairs only studies bias within a single sentence (intrasentence) and ignores discourse-level (inter-sentence) measurements.",
"Furthermore, StereoSet contains an order of magnitude of data that contains greater variety, and hence, has the potential to detect a wider range of biases that may be otherwise overlooked.",
"Lastly, StereoSet measures bias across both masked and autoregressive language models, while CrowS-Pairs only measures bias in masked language models.",
"Another method to evaluate bias in pretrained representations is to measure bias on extrinsic tasks like coreference resolution (Rudinger et al., 2018; Zhao et al., 2018) and sentiment analysis (Kir-itchenko and Mohammad, 2018).",
"This method fine-tunes pretrained representations on the target task.",
"The bias in pretrained representations is estimated by the target task's performance.",
"However, it is hard to segregate the bias of task-specific training data from the pretrained representations.",
"Our CATs are an intrinsic way to evaluate bias in pretrained models.",
"In StereoSet, we select four domains as the target domains of interest for measuring bias: gender, profession, race and religion.",
"For each domain, we select terms (e.g., Asian) that represent a social group.",
"For collecting target term contexts and their associative contexts, we employ crowdworkers via Amazon Mechanical Turk.",
"1 We restrict ourselves to crowdworkers in USA since stereotypes could change based on the country.",
"Table 1 shows the overall statistics of StereoSet.",
"We also provide a full data statement in Section 9 (Bender and Friedman, 2018).",
"We curate diverse set of target terms for the target domains using Wikidata relation triples (Vran-decic and Krtzsch, 2014).",
"A Wikidata triple is of the form < subject, relation, object > (e.g., < Brad Pitt, P106, Actor > ).",
"We collect all objects occurring with the relations P106 (profession), P172 (race), and P140 (religion) as the target terms.",
"We manually filter terms that are either infrequent or too fine-grained ( assistant producer is merged with producer ).",
"We collect gender terms from 1 Screenshots of our Mechanical Turk interface and details about task setup are available in the Section 9.6.",
"Nosek et al. (2002).",
"A list of target terms is available in Appendix A.1.",
"In the intrasentence CAT, for each target term, a crowdworker writes attribute terms that correspond to stereotypical, anti-stereotypical and meaningless associations of the target term.",
"Then, they provide a context sentence containing the target term.",
"The context is a fill-in-the-blank sentence, where the blank can be filled either by the stereotype term or the anti-stereotype term but not the meaningless term.",
"In the intersentence CAT, they first provide a sentence containing the target term.",
"Then, they provide three associative sentences corresponding to stereotypical, anti-stereotypical and meaningless associations.",
"These associative sentences are such that the stereotypical and the anti-stereotypical sentences can follow the target term sentence but the meaningless ones cannot follow the target term sentence.",
"We also experimented with a variant that asked crowdworkers to provide a neutral association for the target term, but found that crowdworkers had significant trouble remaining neutral.",
"In the validation step (next section), we found that many of these neutral associations are often classified as stereotype or anti-stereotype by multiple validators.",
"We conjecture that attaining neutrality is hard is due to anchoring bias (Tversky and Kah-neman, 1974), i.e., stereotypical associations are easy to think and access and could implicitly affect crowdworkers to tilt towards them.",
"Therefore, we discard the notion of neutrality.",
"Some examples are shown in Appendix A.4.",
"In order to ensure that stereotypes reflect common views, we validate the data collected in the above step with additional workers.",
"For each context and its associations, we ask five validators to classify each association into a stereotype, an anti-stereotype or a meaningless association.",
"We only retain CATs where at least three validators agree on the labels.",
"2 This filtering results in selecting 83% of the CATs, indicating that there is regularity in stereotypical views among the workers.",
"Table 10 shows detailed agreement scores for 2 One can increase the quality of the data further by selecting examples where four or more workers agree upon.",
"stereotypes computed using the average of annotator agreement per example.",
"Are people prone to view stereotypes negatively?",
"To answer this question, we classify stereotypes into positive and negative sentiment classes using a sentiment classifier (details in Appendix A.2).",
"As evident in Table 2, people do not always associate stereotypes with negative associations (e.g., Asians are good at math has positive senti-ment).",
"However, people associate stereotypes with relatively more negative associations than anti-stereotypes (41% vs. 33%).",
"We also extract keywords in StereoSet to analyze which words are most commonly associated with target groups.",
"We define a keyword as a word that is more frequent in StereoSet than the natural distribution of words (Kilgarriff, 2009; Jakubicek et al., 2013).",
"Table 3 shows the top keywords of each domain.",
"These keywords indicate that target terms in gender and race are associated with physical attributes such as beautiful , feminine , masculine , etc., professional terms are associated with behavioural attributes such as pushy, greedy, hardwork , etc., and religious terms are associated with belief attributes such as diety, forgiving, reborn , etc.",
"This aligns with expectations and indicates that multiple annotators use similar attributes.",
"We split StereoSet based on the target terms: 25% of the target terms and their instances for the development set and 75% for the test set.",
"We ensure terms in the development set and test set are disjoint.",
"We do not have a training set since this defeats the purpose of StereoSet, which is to measure the biases of pretrained language models (and not the models fine-tuned on StereoSet).",
"Our desiderata of an ideal language model is that it excels at language modeling while not exhibiting stereotypical biases.",
"In order to determine success at both these goals, we evaluate both language modeling and stereotypical bias of a given model.",
"We pose both problems as ranking problems.",
"Language Modeling Score ( lms ) In the language modeling case, given a target term context and two possible associations of the context, one meaningful and the other meaningless, the model has to rank the meaningful association higher than meaningless association.",
"The meaningful association corresponds to either the stereotype or the anti-stereotype option.",
"We define the language modeling score ( lms ) of a target term as the percentage of instances in which a language model prefers the meaningful over meaningless association.",
"We define the overall lms of a dataset as the average lms of the target terms in the split.",
"The lms of an ideal language model is 100, i.e., for every target term in a dataset, the model always prefers the meaningful association of the term.",
"As discussed in Section 2.2, the goal of this metric is not to evaluate the full scale language modeling ability, but only to provide an reasonable metric that allows comparison between different models to analyze the relationship between language modeling ability and stereotypical bias.",
"Stereotype Score ( ss ) Similarly, we define the stereotype score ( ss ) of a target term as the percentage of examples in which a model prefers a stereotypical association over an anti-stereotypical association.",
"We define the overall ss of a dataset as the average ss of the target terms in the dataset.",
"The ss of an ideal language model is 50, for every target term, the model prefers neither stereotypical associations nor anti-stereotypical associations.",
"Idealized CAT Score ( icat ) StereoSet motivates a question around how practitioners should prefer models for real-world deployment.",
"Just because a model has low stereotypical bias does not mean it is preferred over others.",
"For example, although a random language model exhibits the lowest stereotypical bias ( ss = 50) it is the worst language model ( lms = 50).",
"While model selection desiderata is often task-specific, we introduce a simple point-estimate called the idealized CAT ( icat ) score for model comparison assuming equal importance to language modeling ability and stereotypical bias.",
"We define the icat score as lms min ( ss, 100 ss ) 50 centered around the idea that an ideal language model has an icat score of 100 and a stereotyped model has a score of",
"0. Appendix A.6 presents a detailed formulation and Figure 2 (Appendix) highlights this idea.",
"IDEALLM We define this hypothetical model as the one that always picks correct associations for a given target term context.",
"It also picks equal number of stereotypical and anti-stereotypical associations over all the target terms.",
"So the resulting lms and ss scores are 100 and 50 respectively.",
"STEREOTYPEDLM We define this hypothetical model as the one that always picks a stereotypical association over an anti-stereotypical association.",
"So its ss is 100 irrespective of its lms .",
"RANDOMLM We define this model as the one that picks associations randomly, and therefore its lms and ss scores are both 50.",
"SENTIMENTLM In Section 4.4, we saw that stereotypical instantiations are more frequently associated with negative sentiment than anti-stereotypes.",
"In this baseline, we assess if sentiment can be used to detect a stereotypical association.",
"For a given a pair of context associations, the model always picks the association with the most negative sentiment.",
"In this section, we evaluate pretrained models such as BERT (Devlin et al., 2019), ROBERTA (Liu et al., 2019), XLNET (Yang et al., 2019) and GPT2 (Radford et al., 2019) on StereoSet.",
"While scoring sentences using autoregressive language models is well-defined, there is no corresponding scoring mechanism for masked language models.",
"As a result, we evaluate our models using both likelihood-based scoring and psuedolikelihood scoring (Nangia et al., 2020).",
"Likelihood-based Scoring For intrasentence CATs, we define the score as the log probability of an attribute term to fill the blank.",
"If the attribute consists of multiple subwords, we iteratively unmask the subwords from left to right, and compute the average per-subword probability.",
"We rank a given pair of attribute terms based on these probabilities (the one with higher probability is pre-ferred).",
"In intersentence CATs, inspired by Devlin et al. (2019), we use a Next Sentence Prediction (NSP) task to rank the possible associations.",
"For all models, we train identical Next Sentence Prediction heads on identical datasets (details given in Appendix A.5), and compute the log likelihood that any given target sentence follows the context.",
"Given a pair of associations, we rank each association using this score.",
"Psuedo-likelihood Scoring Nangia et al. (2020) adopts psuedo-likelihood based scoring (Salazar et al., 2020) that does not penalize less frequent attribute terms.",
"In intrasentence CAT, we choose to never mask the attribute term but mask each context term one at a time and measure the psuedo-probability of the sentence given the attribute term.",
"We refer the reader to Nangia et al. (2020) for more information on this scoring mechanism.",
"In intersentence CATs, we measure the psuedolikelihood of the context sentence conditioned on the attribute sentence by iteratively masking the tokens in the context sentence while keeping the attribute sentence unchanged.",
"Unlike above models, GPT2 is a generative model in an auto-regressive setting.",
"For the intrasentence CAT, we instantiate the blank with an attribute term and compute the probability of the full sentence.",
"Given a pair of associations, we rank each association using this score.",
"For the intersentence CAT, our scoring mechanism mirrors that for masked language models.",
"If the likelihood-based scoring mechanism is used, then we train an NSP head on identical datasets (details given in Appendix A.5) and compute the log likelihood that any given target sentence follows the context.",
"If the masked language models are scored with psuedo-likelihood, then we measure the effect of the context sentence by measuring the joint probability of the attribute sentence with and without the context.",
"Given a pair of associations, we rank each association by the ratio of these probabilities.",
"Table 4 shows the overall results of baselines and models on StereoSet test set when using likelihood-based scoring, and Table 5 shows the results when using psuedo-likelihood based scoring.",
"The results exhibit similar trends on the development and test sets.",
"Since the initial version of this paper 3 used likelihood-based scoring, we mainly center the discussion around it as the trends are similar to pseudo-likelihood.",
"Baselines vs. Models As seen in Table 4, all pretrained models have higher lms values than RANDOMLM indicating that these are better language models as expected.",
"Among models, GPT2-large is the best performing language model (88.3) followed by GPT2-medium (85.9).",
"Coming to stereotypical bias, all pretrained models demonstrate more stereotypical behavior than RANDOMLM.",
"While GPT2-large is the most stereotypical model of all pretrained models (60.1), ROBERTA -base is the least stereotypical model (50.5).",
"SENTIMENTLM achieves the highest stereotypical score compared to all pretrained models, indicating that sentiment can indeed be exploited to detect stereotypical associations.",
"However, its language model performance is worse, which is expected, since sentiment alone isn't sufficient to distinguish meaningful and meaningless sentences.",
"Relation between lms and ss All models exhibit a strong correlation between lms and ss (Spearman rank correlation of 0.87).",
"As the language model becomes stronger, its stereotypical bias ( ss ) does too.",
"We build the strongest language model, ENSEMBLE , using a linear weighted combination of BERT-large, GPT2-medium, and GPT2-large, which is also found to be the most biased model ( ss = 62.5).",
"The correlation between lms and ss is unfortunate and perhaps unModel LanguageModelScore( lms ) StereotypeScore( ss ) IdealizedCATScore( icat ) Test set IDEALLM 100 50.0 100 STEREOTYPEDLM 100 0.0 RANDOMLM 50.0 50.0 50.0 SENTIMENTLM 65.1 60.8 51.1 BERT-base 82.3 57.1 70.7 BERT-large 81.1 58.0 68.1 ROBERTA -base 83.5 58.5 69.4 ROBERTA -large 83.4 59.8 67.0 XLNET -base 60.5 52.4 57.6 XLNET -large 61.3 54.0 56.5 GPT2 86.8 59.0 71.1 GPT2-medium 88.6 61.6 68.0 GPT2-large 89.6 62.7 66.8 ENSEMBLE 90.1 62.2 68.1 Table 5: Performance of pretrained language models on the StereoSet test set, measured using psuedolikelihood scoring for the masked language models.",
"avoidable as long as we rely on the real world distribution of corpora to train language models since these corpora are likely to reflect stereotypes.",
"Amongst the models, GPT2 exhibits more unbiased behavior than other models ( icat score of 73.0).",
"However, this metric is not intended as the sole criterion for model selection.",
"Further research is required in designing better metrics.",
"Impact of model size For a given architecture, all of its pretrained models are trained on the same corpora but with different number of parameters.",
"For example, both BERT-base and BERT-large are trained on Wikipedia and BookCorpus (Zhu et al., 2015) with 110M and 340M parameters respectively.",
"As the model size increases, we see that its language modeling ability ( lms ) increases, and correspondingly its stereotypical score.",
"Impact of scoring mechanism We evaluate models using both likelihood based scoring and psuedo-likelihood based scoring.",
"First, we note that likelihood-based ( ll ) scoring is higher than psuedo-likelihood-based ( pll ) scoring by a narrow margin (avg lms ll = 79 . 88 , avg lms pll = 79 . 68 ).",
"For intrasentence CATs, psuedo-likelihood outperforms likelihood scoring by a wide margin (avg lms ll = 75 . 7 , avg lms pll = 79 . 4 ).",
"However, psuedo-likelihood scoring is significantly degraded for intersentence CATs (avg lms ll = Model LanguageModelScore( lms ) StereotypeScore( ss ) IdealizedCATScore( icat ) Intrasentence Task BERT-base 82.5 57.5 70.2 BERT-large 82.9 57.6 70.3 ROBERTA -base 71.9 53.6 66.7 ROBERTA -large 72.7 54.4 66.3 XLNET -base 70.3 53.6 65.2 XLNET -large 74.0 51.8 71.3 GPT2 91.0 60.4 72.0 GPT2-medium 91.2 62.9 67.7 GPT2-large 91.8 63.9 66.2 ENSEMBLE 91.7 63.9 66.3 Intersentence Task BERT-base 88.3 61.7 67.6 BERT-large 88.7 60.6 71.0 ROBERTA -base 64.4 47.4 61.0 ROBERTA -large 78.8 55.2 70.6 XLNET -base 65.0 54.6 59.0 XLNET -large 82.5 56.1 72.5 GPT2 76.3 52.3 72.8 GPT2-medium 80.5 53.5 74.9 GPT2-large 84.9 56.1 74.5 ENSEMBLE 89.4 60.9 69.9 Table 6: Performance on the Intersentence and Intrasentence CATs on the StereoSet test set, measured using likelihood-based scoring.",
"78 .",
"82 , avg lms pll = 75 .",
"98 ).",
"This suggests that psuedo-likelihood has trouble scoring longer sequences.",
"Moreover, Aribandi et al. (2021) has shown that psuedo-likelihood has higher variance than likelihood scoring.",
"Impact of pretraining corpora BERT, ROBERTA , XLNET and GPT2 are trained on 16GB, 160GB, 158GB and 40GB of text corpora.",
"Surprisingly, the corpora size does not correlate with either lms or ss .",
"This could be due to the differences in architectures and corpora types.",
"A better way to verify this would be to train the same model on increasing amounts of corpora.",
"Due to lack of computing resources, we leave this work for the community.",
"We conjecture that the high performance of GPT2 (high lms and high ss ) is due to the nature of its training data.",
"GPT2 is trained on documents linked from Reddit.",
"Since Reddit has several subreddits related to target terms in StereoSet (e.g., relationships, religion), GPT2 is likely to be exposed to contextual Model LanguageModelScore( lms ) StereotypeScore( ss ) IdealizedCATScore( icat ) Intrasentence Task BERT-base 89.6 56.9 77.3 BERT-large 88.8 58.4 74.0 ROBERTA -base 88.0 58.5 73.0 ROBERTA -large 88.1 59.6 71.2 XLNET -base 60.6 51.3 59.0 XLNET -large 61.1 53.2 57.3 GPT2 91.0 60.4 72.0 GPT2-medium 91.2 62.9 67.7 GPT2-large 91.8 63.9 66.2 ENSEMBLE 91.9 63.9 66.3 Intersentence Task BERT-base 75.0 57.2 64.1 BERT-large 73.3 57.6 62.1 ROBERTA -base 79.1 58.4 65.9 ROBERTA -large 78.7 60.0 63.1 XLNET -base 60.4 53.5 56.2 XLNET -large 61.4 54.7 55.7 GPT2 82.5 57.6 70.0 GPT2-medium 85.9 60.3 68.3 GPT2-large 87.5 61.5 67.3 ENSEMBLE 89.1 61.1 69.9 Table 7: Performance on the Intersentence and Intrasentence CATs on the StereoSet test set, measured using psuedo-likelihood scoring.",
"Domain-wise bias Table 8 shows domain-wise results of the ENSEMBLE model on the test set.",
"The model is relatively less biased on race than on others ( ss = 61.8).",
"We also show the most and least biased target terms for each domain from the development set (see Table 10 for human-agreement scores, a proxy for most and least biased terms).",
"We conjecture that the most biased terms are those that have well established stereotypes and are also frequent in language.",
"This is the case with mother (attributes: caring, cooking), software developer (attributes: geek, nerd), and Africa (attributes: poor, dark).",
"The least biased are those that do not have well established stereotypes, for example, producer and Crimean .",
"The outlier is Muslim , although it has established stereotypes indicated by the high human agreement (see Table 10).",
"This requires further investigation.",
"tence CATs on the test set.",
"Since intersentence tasks has more number of words per instance, we expect intersentence language modeling task to be harder than intrasentence, especially results computed using psuedo-likelihood (Table 7).",
"In this work, we develop the Context Association Test (CAT) to measure the stereotypical biases of pretrained language models in contrast with their language modeling ability.",
"We crowdsource StereoSet , a dataset containing 16,995 CATs to test biases in four domains: gender, profession, race and religion.",
"We show that current pretrained language models exhibit strong stereotypical biases.",
"We also find that language modeling ability correlates with the degree of stereotypical bias.",
"This dependence has to be broken if we are to achieve unbiased language models.",
"We hope that StereoSet will spur further research in evaluating and mitigating bias in language models.",
"We also note that achieving an ideal performance on StereoSet does not guarantee that a model is unbiased since bias can manifest in many ways (Gonen and Goldberg, 2019; Bender et al., 2021).",
"We would like to thank the anonymous reviewers, Yonatan Belinkov, Vivek Kulkarni, and Spandana Gella for their helpful comments in reviewing this paper.",
"This work was completed in part while MN and AB were at Intel AI."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"result",
"method",
"result",
"other",
"other"
] |
[
"Standard word embedding algorithms, such as word2vec and Glove , make a restrictive assumption that words are likely to be semantically related only if they co-occur locally within a window of fixed size.",
"However, this restrictive assumption may not capture the semantic association between words that co-occur frequently but non-locally within documents.",
"To alleviate this restriction, in this paper, we propose a graph-based word embedding method, named word-node2vec'.",
"By relaxing the strong constraint of locality, our method is able to capture both local and non-local co-occurrences.",
"Word-node2vec constructs a weighted graph, where each node represents a word and the weight of an edge between two nodes represents a combination of both local (e.g. word2vec) and document-level co-occurrences.",
"Our experiments show that word-node2vec outperforms word2vec and glove on a range of different tasks, such as word-pair similarity prediction, word analogy and concept categorization.",
"Word embedding, the process of obtaining vector representations of words, is a first step towards addressing language semantics, in which discrete entities, such as words, are embedded as vectors over a continuous space of reals.",
"This not only facilitates to obtain semantic similarities between words to improve tasks such as semantic search (Ganguly et al., 2015; Roy et al., 2016), but is also useful in a number of down-stream NLP tasks including concept categorization (Jastrzebski et al., 2017), information retrieval (Guo et al., 2016), sentence similarity prediction (Mueller and Thya-garajan, 2016), sentiment analysis (Faruqui et al., 2015) and POS tagging (Tsvetkov et al., 2016) etc.",
"Word embedding approaches such as word2vec (Mikolov et al., 2013a) and Glove (Pennington et al., 2014) rely on a large corpus to learn the association between words.",
"The architecture of existing word embedding approaches mimics the process of human cognition of word association by learning the representation of each word with an objective of maximizing the likelihood of predicting the words around its local context (defined by a fixed length word window).",
"A limitation of existing word embedding approaches, such as word2vec and glove, is that they use a strong constraint that words are likely to be semantically related to each other only if one occurs within a local context of the another, where the local context is given by a word window of specified length.",
"On the other hand, non-local or document-level co-occurrences between words have been widely used to estimate semantic similarities between words.",
"More specifically, the latent semantic analysis (LSA) method proposed by Deerwester et al. (1990) uses a spectral analysis (method of principal component analysis) of the term-document matrix of a collection to obtain the most informative concepts (word classes), and then expresses each document as a linear combination of these principal components.",
"Blei et al. (2003) estimate a generative model from a given collection by assuming that documents are mixtures of a preset number of topics, where each topic represents a word distribution over the vocabulary.",
"This is largely similar to decomposing a term-document matrix as a product of matrices with non-negative components, a process commonly known as non-negative matrix factorization (NMF) (Gaussier and Goutte, 2005).",
"The underlying common idea among all these approaches is to make use of the frequent document-level word co-occurrences to identify likely semantic association between words.",
"Despite the presence of a vast volume of literature on document-level (non-local) word co-occurrences, word embedding approaches do not utilize this information to derive the word representations.",
"In this paper, we propose to augment the document-level non-local word co-occurrence information with the local co-occurrence information that methods such as word2vec and glove use.",
"More specifically, we propose a graph-based word embedding method, named word-node2vec , that by relaxing the strong constraint of locality, is able to capture both the local and non-local co-occurrences.",
"To represent the local dependencies, each node, representative of a word (hence the name word-node'), is initialized with a vector representation obtained with a standard method, e.g. word2vec.",
"We then define the weight of the edge between a pair of word-nodes to reflect their likelihood of non-local co-occurrence, computed with the help of the global term-document matrix for the whole collection.",
"The rest of the paper is organized as follows.",
"In Section 2, we survey existing literature on word embedding.",
"In Section 3, we revisit the skip-gram approach and propose a graph-based view of the skip-gram objective as a pre-cursor to developing our model.",
"In Section 4, we extend the skip-gram graph model with non-local document-level co-occurrence information.",
"Section 5 describes our experimental setup.",
"Section 6 reports the results of our new embedding approach against a number of baselines.",
"Finally, Section 7 concludes the paper with directions for future work.",
"The word2vec (Mikolov et al., 2013a) embedding model shifts a window of a predefined size (a parameter) across the text of a collection of documents in order to train a linear classifier for each word to predict itself given its context (continu-ous bag-of-words), or its context given the word (skip-gram).",
"The parameter vector transforming a word to its context (or vice-versa) gives its embedded representation.",
"In addition to making use of the words in the context as positive samples, word2vec also relies on the use of words randomly sampled from the collection (outside the current context) as negative examples.",
"Levy and Goldberg (2014) showed that the negative sampling based skip-gram (SGNS) objective function of word2vec is mathematically equivalent to factorizing a positive point-wise mutual information gain (PPMI) matrix shifted by log ( k ) , where k is the number of negative samples.",
"The key idea behind the glove algorithm proposed in (Pennington et al., 2014) is to make use of the ratio of the co-occurrence probabilities between word pairs to better distinguish semantically related words from non-related ones.",
"The study ultimately shows that factorizing the log of the co-occurrence matrix leads to effective embedded representation of words.",
"The co-occurrences in both word2vec and glove are essentially local in nature.",
"In contrast, our proposed algorithm leverages both local and non-local co-occurrences.",
"More recently, Peters et al. (2018) proposed ELMO , a deep contextualized word representation with layers of stacked bi-directional LSTMs to model both",
"a) complex characteristics of word use (e.g., syntax and semantics), and",
"b) their diversity across various linguistic contexts.",
"A limitation of ELMO is that a word representation may effectively be learned mainly in the presence of an associated context, as a result of which the method is likely to find applications mostly in downstream tasks, e.g. question answering and sentiment analysis.",
"However, in contrast, our proposed method can learn the representation of a word in isolation, which means that, similar to word2vec and Glove, word vectors obtained using our method can be applied directly to (and is also likely to work well for) word similarity and word analogy tasks.",
"We included ELMO as of our baseline approaches in our experiments.",
"Grover and Leskovec (2016) proposed a skip-gram based objective function to embed each node of a graph.",
"Analogous to skip-gram based word embedding, each node vector is given as input to a linear classifier to predict the context vector around a node.",
"The context vector around a node, in this case, consists of a sequence of nodes visited by a random walk starting from that node.",
"In our method, we use a similar graph-based construction to train vector representations of a node (each node a word).",
"However, we use a stratified sampling approach within a maximum distance (hop-count) of 2 , instead of allowing the random walk to proceed along in a combined depth-first and breadth-first manner, as in (Grover and Leskovec, 2016).",
"Through our experiments, we find that larger hop-counts (i.e. longer transitive dependencies) introduce noise in the document-level word co-occurrence estimation process.",
"In this section, we propose a general word embedding framework based on the skip-gram objective function of word2vec.",
"Our proposed method relies on a general construction of the context around a word.",
"We modify the skip-gram objective function of word2vec to take into account this general context of words.",
"Before describing our proposed approach, we revisit the objective function of negative sampling based skip-gram word2vec (SGNS).",
"Skip-gram.",
"In word2vec, the context of a word comprises words occurring within a window of a fixed size (say k ) pivoted at a particular instance of w in the collection.",
"More formally, let ( w ) denote the set of indexes where the word w occurs in a collection C = { t 1 , . . . , t T } , T denoting the total number of tokens in the collection C , i.e. ( w ) = { i : t i = w } .",
"(1) We then construct the context c ( w ) of a word as c ( w ) = i ( w ) kj = k j (cid:54) =0 t i + j (2) Let denote the set of all observed word-context pairs ( w, c ( w )) , i.e. + = w V { w, c ( w ) } , (3) where V denotes the vocabulary set, and denote the set of negative samples of word-context pairs, i.e. = w V { w, { v : v ( V c ( w )) }} , (4) where words v 's in the negative context set are randomly sampled from the complement set c ( w ) .",
"Let y be an indicator random variable denoting semantic relatedness of a word with its context.",
"For a word w and its context c ( w ) (as defined in Equation 2, the SGNS algorithm seeks to maximize the objective function J ( ) = (cid:88) w,c ( w ) + p ( y = 1 | w , c w )+ (cid:88) w,c ( w ) p ( y = 0 | w , c w )) , (5) where p ( . ) is the log-likelihood function, and R d | V | represents the trainable matrix of parameters, each d dimensional column vector of the matrix denoting the vector representation of word w , i.e. w = w .",
"Note that the vector for a set of context words c ( w ) is obtained by some aggregation function (sum or average) over the constituent words, i.e. c ( w ) = (cid:88) u c ( w ) u .",
"In order to optimize J ( ) , the word2vec approach shifts a window of size k pivoted around a word w = t i (token positioned at offset i in the corpus), and applies stochastic gradient descent (SGD) to update the parameters for the corresponding word w and its context vector c ( w ) .",
"A Graph Formulation of SGNS.",
"We now propose a general framework that allows contexts to be defined in a more general way.",
"The solution relies on defining a graph G = ( V , E ) , where each node corresponds to a word from the vocabulary of the given collection, i.e. V = { x w : w V } .",
"In general, an edge ( x u , x v ) E represents a relation between two words u and v of weight w ( x u , x v ) R .",
"For example, in order to define the context of SGNS (Equation 2), the edge set is defined as E = { ( x w , x u ) : u i ( w ) kj = k j (cid:54) =0 t i + j } .",
"Learning the vector representations for each node of the graph G leads to learning the vector representation for each word, because there is a one-one mapping between the set of nodes V and the set of words V (henceforth we refer to a node of this general class of graphs, defined as per Equation 7, as a word-node ).",
"The objective of the embedding is to learn vector representations of nodes such that two nodes are close in the embedded space if, as per the edge relations of the graph, these nodes are within a -adjacency neighborhood of each other.",
"The -adjacency neighborhood of a graph is the set N ( x w ) = { x u V : h ( x w , x u ) } , (9) where h ( u, v ) denotes the hop-count or adjacency number between nodes u and v .",
"In the general formulation, the set of N ( x w ) , constituting the set of nodes reachable from paths of length at most k starting at x w , act as positive examples to learn the embedding of node x w .",
"This is because these positive examples seek to make the vector representation of x w similar to the vector representations of nodes in N ( x w ) .",
"More formally, + = x w V { x w , N ( x w ) } , = x w V { x w , { x u : u V N ( x w ) }} .",
"Instead of iterating over the words in a corpus, the SGNS equivalent is then achieved by iterating over the set of nodes and maximizing the same objective function of Equation 5 using the defi-nitions of the positive and negative example sets from Equation 10.",
"Note that to achieve the SGNS objective the value of is set to 1 in the defini-tion of + in Equation 10, i.e. the set of context for a word-node comprises one-hop neighbours as defined by the edge relations of Equation 8.",
"The graph based approach of Section 3 allows alternative ways to define the context and learn the objective function to obtain word-node representations.",
"In this section, we describe how to augment the non-local document-level co-occurrence information in the graph-based framework.",
"Co-occurrence Weights.",
"The first step to include non-local co-occurrences is to modify the edge relations of SGNS (Equation 8) to accommodate weighted document-level co-occurrences.",
"Instead of considering the collection C = { t 1 , . . . , t T } as a stream of words, we consider C as a set of M documents { D i } Mi =1 .",
"First, we make provision to include weighted edges of the form ( x w , x u , ( x w , x u )) in the edge construction process of Equation 8.",
"The weight ( x w , x u ) between word-nodes x w and x u is intended to represent a measure of association between these words.",
"Next, we describe how to compute the non-local co-occurrence weight between a pair of words.",
"First, we compute the co-occurrence probability of two words w and u as P ( w, u ) = (cid:80) Mi =1 I ( w, u, D i ) (cid:80) Mi =1 I ( w, D i ) (cid:80) Mi =1 I ( u, D i ) , (11) where the numerator denotes the total number of times that the words w and u co-occur in the collection of all documents, and the denominator denotes the number of times each occur independently.",
"In our approach, we use a generalized form of Equation 11, where analogous to the Jelinek-Mercer smoothing method (Ponte and Croft, 1998), we take into account the informativeness of the co-occurrences by linearly combining the frequencies with the global statistics of inverse collection frequency.",
"More specifically, P ( w, u ) = P ( w, u ) + (1 ) T 2 | ( w ) || ( u ) | , (12) where P ( w, u ) represents the maximum likelihood estimate computed by Equation 11 and the denominator denotes the product of the collection frequencies of the terms (as per the notation of Equation 1).",
"It can be seen that Equation 12 allows relative weighting of the term frequency and the informativeness components.",
"Combination with Local Co-occurrences.",
"The next step in our word-node2vec method is to augment the non-local co-occurrence information computed as per Equation 12 with the local co-occurrence of SGNS as defined in Equation 8.",
"For this, analogous to (Pennington et al., 2014), we compute the probability of co-occurrence between a word pair restricted within a window of size k over the whole collection.",
"More formally, P k ( w, u ) = 1 | ( w ) | (cid:88) i ( w ) I ( t i + j = u ) kj = k (13) Next, we assign weight to an edge by combining the local and non-local co-occurrence probabilities estimated from Equations 13 and 12 respectively.",
"Formally speaking, ( x w , x u ) = P ( w, u ) P k ( w, u ) .",
"(14)",
"Context with Weighted Edges.",
"Constructing the context of a node x w (Section 3), requires a modification aimed to take into account the edge weights while selecting the neighboring nodes of x w .",
"Instead of defining the context as the entire set of -neighborhood N ( x w ) of a node x w , we define a -neighbourhood of length (hop-count), l , which is a subset of l samples drawn from the overall neighbourhood.",
"The likelihood of sampling a node x u from the neighbourhood set is proportional to the weight of the edge ( x w , x u ) , i.e., ( x w , x u ) .",
"This way of defining the context allows the algorithm to make use of the edge weights (local and non-local co-occurrences) in learning the node representations, i.e. assigning more importance to associations with higher weights in seeking to embed the current word-node close to them.",
"Our idea, in general, is to use stratified sampling, where each stratum corresponds to a neighbourhood of particular length.",
"The priors assigned to the strata in increasing sequence of adjacency length form a decreasing sequence, which means that the most emphasis is put on direct co-occurrence evidence (i.e. the 1-adjacent neighbor-hood), than to the 2-adjacent nodes and so on.",
"Stratified sampling requires the strata to be mutually disjoint of each other.",
"This means that the -neighbourhood of Equation 9 needs to be rede-fined to ensure that any node belongs to exactly one of the partitions (defined by its hop-count).",
"To state this formally, we define the set of nodes of ( not up to ) hop-count j as H j ( x w ) = { x u : h ( x w , x u ) = j } (15) The -neighbourhood is then defined as N ( x w ) = j =1 ( H j ( x w ) j 1 j (cid:48) =1 H j (cid:48) ( x w )) .",
"(16)",
"A subset of size l , comprised of stratified samples from N ( x w ) , is then sampled with decreasing priors 1 , . . . , , i.e., j < j 1 j = 2 , . . . , and (cid:80) j =1 j = 1 .",
"Putting things together, the probability of sampling a node from the set N ( x w ) defined as per Equation 16 is then given by P ( x u | N ( x w ))= j P ( x u | H j ( x w ))= j ( x w , x u ) ( x w , . ) , (17) where ( x w , x u ) are edge weights computed with Equation 14 and ( x w , . ) denotes the sum of edges emanating from node x w .",
"As a point of note, for our experiments, we obtained optimal results by using = 2 .",
"Consequently, to simplify the description of our experiments, we name the parameter 1 as (the parameter 2 is then identical to 1 ).",
"We would also mention at this point that our proposed way of constructing the context by sampling neighboring nodes is different from the one proposed in (Grover and Leskovec, 2016), which uses a combination of breadth-first (BFS) and depth-first (DFS) traversals, with parameters p and q respectively.",
"Our experiments reveal that our sampling strategy outperforms that of Grover and Leskovec (2016) (treated as a baseline).",
"In this section, we describe our experimental setup to evaluate our new word embedding method.",
"A word embedding algorithm requires a collection to learn word representations.",
"To compare the various word embedding approaches (i.e. our method and the baselines), we use the DBPedia (2014) corpus, which is a collection of abstracts of Wikipedia pages crawled in 2014 1 .",
"Dataset characteristics are outlined in Table 1.",
"As part of preprocessing, we removed words with collection frequency less than 10 and also removed stopwords 2 .",
"The objective of our experiments is two-fold.",
"First, to show that a combination of local and global approaches is likely to yield effective embedded representations of word vectors, and second that our proposed graph-based formalism is likely to work better than a trivial black-box way of combining the two sources of information.",
"Local Co-occurrence approaches.",
"As approaches that use local co-occurrence information, we use three state-of-the-art embedding approaches namely skip-gram word2vec with negative sampling (SGNS) (Mikolov et al., 2013a), Glove (Pennington et al., 2014) and Fasttext (Joulin et al., 2016).",
"All these methods rely only on co-occurrences (at the level of words for the first two and at the level of character n-grams for the last one) within a word or character n-gram window of specified length k (acting as a param-eter).",
"Fasttext learns the vector representation of each word by aggregating (vector sum) the vector representations of its constituent n-grams.",
"Additionally, we also employ a more recent approach, namely ELMO (Peters et al., 2018), which relies on a pre-trained model (comprised of stacked bidirectional LSTMs) to infer vectors for a given context (typically a sequence of words).",
"For our experiments, Document-level Co-occurrence approaches.",
"Although not an embedding approach, the LDA topic modeling algorithm outputs two matrices, namely RM d and R d V , representing the document-topic and topic-words distribution respectively (Blei et al., 2003).",
"LDA uses document-level word co-occurrences to estimate both these matrices.",
"In principle, one can then use the matrix as a substitute for the word embedding parameter matrix of SGNS (see Equation 5).",
"This gives d dimensional vectors for each word purely with a global co-occurrence based approach.",
"Although it is possible to choose other non-local co-occurrence approaches as baselines, e.g. PLSA (Hofmann, 1999) or LSA, (Deerwester et al., 1990), it was shown in (Blei et al., 2003) that LDA outperforms each of these.",
"Consequently, we use the stronger baseline of LDA in our experiments.",
"Combination of Local and Non-local Co-occurrences.",
"To empirically demonstrate the effectiveness of our proposed graph-based word-node embedding, we employ an additional baseline that is a linear combination of the word vectors obtained individually with the local and nonlocal approaches.",
"More formally, the vector of each word w is given as w = w Local + (1 ) w LDA , (18) where w Local is the vector representation of word w obtained by a local co-occurrence baseline, i.e. SGNS and Glove, whereas w LDA represents the vector for the word w obtained with LDA.",
"Additionally, we employ the node2vec approach as a baseline.",
"In particular, we use node2vec to learn the word-node representations of the graph constructed as per Section 4.",
"The purpose of this baseline is to show that our way of defining the contexts around word-nodes is more suitable for our task of word embedding than a general-purpose graph node embedding approach.",
"datasets, each corresponding to one of the following three evaluation tasks.",
"Word Similarity.",
"A standard way to measure the effectiveness of embedded words is to measure how well the similarity between a pair of words correlates with human judgments.",
"Two such standard datasets that we use for our experiments are the WSIM-353 (Finkelstein et al., 2014) and the MEN (Bruni et al., 2014) datasets.",
"Both comprise a list of word pairs, with an associated human judged similarity value.",
"This similarity value is expected to be high for semantically similar words, such as morning' and sunrise' (human assigned score of 49 out of 50), and low for semantically unrelated words, such as angel' and gaso-line' (score of 1 out of 50), both examples being taken from the MEN dataset.",
"Word Analogy.",
"The word analogy task consists of templates of the form A:B as C:X, where A, B, and C are given words, whereas X is unknown.",
"Using a vector representation of words this analogy task is solved by retrieving the vector most similar to that of B + C A .",
"A word embedding is considered effective if it finds a greater number of correct answers (resulting in higher accuracy).",
"We employed three different analogy datasets, namely, the Google Analogy (Mikolov et al., 2013a), the MSR Analogy (Mikolov et al., 2013b) and the SemEval-2012 task 2 (Jurgens et al., 2012) datasets.",
"The MSR dataset contains syntactic questions only involving morphological variations.",
"The Google dataset on the other hand contains both syntactic and semantic questions.",
"Given an analogy A:B as C:D', the Semeval-2012 task requires prediction of the degree to which the semantic relations between A and B are similar to those between C and D. In our experiments, we treat the given entity D as unknown and seek to predict D, similar to the MSR and Google analogy datasets.",
"Table 2 provides an overview of examples from these datasets.",
"a concept type derived from an ontology.",
"For this task, we employ the AP (Almuhareb and Poe-sio, 2005), BLESS (Baroni and Lenci, 2011) and ESSL 2 b (Marco Baroni and Lenci, 2008) datasets.",
"The AP dataset contains 402 nouns from 21 WordNet classes, e.g., nouns such as ceremony', feast', and graduation' belong to the class So-cial Occasion'.",
"The BLESS dataset, designed for the evaluation of distributional semantic models, contains 200 distinct English concrete nouns as target concepts.",
"These nouns are categorized into 17 broad classes.",
"Evaluation Metrics and Pipeline.",
"The word similarity prediction effectiveness is measured with the help of Spearman's rank correlation co-efficient .",
"This measures the rank correlation (higher is better) between the list of word pairs sorted in decreasing order of inter-similarity values as predicted by a word embedding algorithm and the reference list of human judged word pairs.",
"For the analogy and the concept categorization tasks, we report the accuracy in predicting the reference word and that of the class, respectively.",
"Parameters and Settings.",
"In our experiments, for all the methods, except ELMO, we set the number of dimensions to 200 .",
"To find optimal settings for each method (except ELMO), we use the MEN dataset as a development set for tuning the parameters of each method.",
"Each method with the optimal parameter settings is then applied for the rest of the datasets and tasks.",
"Since we used a pre-trained model for ELMO, the number of dimensions corresponds to the size of the output layer of the network, the value of which in the default configuration of the Python implementation 3 is 1024 .",
"The parameters of SGNS are window size ( k ) and the number of negative samples (NS).",
"For the baseline approach SGNS, we varied k from 5 to 40 in steps of 5 and found that the best results are obtained when k = 10 and NS = 5 .",
"Similarly, for Glove we chose the optimal settings by varying k within the same range of [5 , 40] and found that the optimal for the MEN dataset is obtained for k = 20 .",
"We obtain the LDA results by setting the number of topics to 200 (so as to match with the dimensionality).",
"As LDA hyper-parameters, we use settings as prescribed in 3 https://github.com/allenai/allennlp/ blob/master/tutorials/how_to/elmo.md Method Spearman's MEN WSIM SGNS ( k = 10 , NS = 5 ) 0.7432 0.6977 Glove ( k = 20 ) 0.7066 0.6706 FastText 0.7307 0.6518 ELMO 0.4225 0.4631 LDA 0.4933 0.4074 SGNS-LDA ( = 0 . 9 ) 0.7367 0.6548 Node2vec ( p = 0 . 5 , q = 0 . 5 , l = 40 ) 0.7440 0.6988 Word-node2vec ( = 0 . 5 , = 0 . 7 , l = 20 ) 0.7491 0.7032 Table 3: Word similarity prediction results.",
"(Griffiths and Steyvers, 2004), i.e., = 0 .",
"1 and = 0 .",
"25 ( 50 / (# topics = 200) ).",
"Since we found that SGNS performed significantly better than Glove, we use SGNS vectors for the linear combination method (Equation 1), which we call SGNS-LDA from hereon.",
"The parameter was varied within a range of [0 . 1 , 0 . 9] in steps of 0 .",
"1 ( = 0 and = 1 degenerate to that of LDA and SGNS respectively).",
"We found that the best results are obtained for = 0 .",
"9 .",
"For node2vec baseline approach of word-node embedding, we varied the parameters p and q (BFS and DFS parameters) within a range of [0 . 1 , 5] and found that the best results on the MEN dataset are given for p = 1 and q = 1 (Grover and Leskovec, 2016).",
"Another parameter in node2vec is the random walk length, l , for which the optimal value was found to be 80 .",
"For word-node2vec, in addition to window size ( k ) and number of negative samples ( NS ), three more parameters are:",
"i) , i.e., the importance of the presence of a term relative to its informativeness (Equation 12,",
"ii) , the prior assigned to sampling from the 1-adjacent neighborhood, and",
"iii) the size of the context sampled from the neighborhood, l (this is analogous to the random walk length parameter of node2vec).",
"Instead of separately optimizing the parameters common to SGNS, we directly use the optimal values of k = 10 and NS = 5 for word-node2vec.",
"The optimal results of the additional parameters, tuned on the MEN dataset, are shown in Table 3.",
"Word Similarity Prediction.",
"Table 3 shows the results obtained by the competing methods on the word similarity prediction task.",
"It can be seen that Glove turns out to be relatively ineffective in modeling the semantic representations of words 0.685 0.690 0.695 0.700 0.705 0.710 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.663 0.673 0.683 0.693 0.703 0.713 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.270 0.275 0.280 0.285 0.290 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 P @1 0.265 0.270 0.275 0.280 0.285 0.290 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 P @1 Figure 1: Parameter sensitivity of word-node2vec on word prediction (left column) and word analogy (right column) tasks using WSIM (top row) and MSR (bottom row) datasets.",
"as compared to human judgments.",
"SGNS performs significantly better and the settings trained on MEN dataset generalize well on the WSIM-353 dataset as well.",
"LDA performs rather poorly indicating that only global co-occurrences can lead to noisy representations of words.",
"FastText performs worse as compared to SGNS.",
"It is worth mentioning that the performance of ELMO is disappointing on this task of semantic similarity prediction, because of the most likely reason that it better learns vector representations of word in the presence of a context.",
"A linear combination of SGNS and LDA (Equa-tion 1 with = 0 . 9 ) does not perform better than SGNS, which means that a simple way of combining the embedded representations obtained individually with local and non-local approaches does not work well.",
"The node2vec approach of embedding nodes of the word-nodes graph constructed as per the description of Section 4 relies on a random walk based construction of the context of a word node.",
"This random walk based context construction is only able to improve the SGNS results slightly, indicating that random walks can introduce noise in the contexts of word-nodes.",
"The word-node based graph construction (in-corporating local and non-local co-occurrences in a principled way) works particularly well in conjunction with the stratified sampling based approach of selecting context words from the neighborhood.",
"The optimal value of = 0 .",
"5 suggests that document-level co-occurrences should be computed by assigning equal importance to term presence and informativeness.",
"A value of = 0 .",
"7 confirms the hypothesis that more emphasis should be put on direct co-occurrences.",
"Word Analogy and Concept Categorization.",
"Similar trends are observed in the word analogy and concept categorization tasks in Tables 4 and 5 respectively.",
"Relatively higher improve-Rank SGNS word-node2vec 1 albums 0.929 albums 0.926 2 selftitled 0.885 selftitled 0.883 3 rerecorded 0.868 rerecorded 0.863 4 promotional 0.815 released 0.852 5 reissue 0.790 song 0.810 Table 6: Nearest neighbors of the word album' obtained by SGNS and word-node2vec.",
"ments with word-node2vec are noted for the MSR analogy task (comprised of syntactic categories).",
"Among the baseline approaches, both node2vec and SGNS-LDA work well on the concept categorization task.",
"However, the performance improvements are inconsistent across datasets, e.g. SGNS-LDA performs well on ESSLI 2 b and poorly on AP.",
"Our proposed method configured on the MEN dataset works consistently well across all datasets, which indicates that word-node2vec can generalize well for different tasks.",
"As a side observation, we note that ELMO performs well for the analogy and concept categorization tasks (yielding the best results in particular on the Google analogy dataset).",
"Although the results are not directly comparable because of differences in the dimensionality of the vectors and also in the collection of documents used in the pre-trained ELMO vectors (Billion word benchmark as against DBPedia in our case), it could possibly be reasoned that the additional contextual information of the ELMO vectors turns out to be useful for in the analogy task.",
"Embedding Examples.",
"Table 6 shows an example of the change in the neighbourhood of a sample word in the embedded space obtained by SGNS and word-node2vec.",
"It can be seen from the table that word-node2vec is able to push relevant words, such as released' and song' within the top 5-NN of the word album'.",
"Although the words promotional' and reissue' are related to album', the semantic association of released' and song' with album' is apparently higher.",
"We found that the word song' occurs in the local context of the word album' only 133 , 494 number of times out of a total number of 177 , 487 instances of the word album'.",
"This means that a significant percentage of times (almost 25% ), song' co-occurs with album' at a document-level.",
"Our embedding algorithm is able to leverage this information by making the vector for song' closer to album'.",
"Sensitivity Analysis.",
"Tables 3-5 show word-node2vec results with optimal parameter settings.",
"We now investigate the effect of varying these parameters on each individual evaluation task.",
"We observe that both term presence and term informativeness are important to model document-level co-occurrences as seen from the fact that the and accuracy values decrease as gets close to 0 or 1 (the 1st and 3rd plots from the left of Figure 1).",
"Similarly, it can be seen that the results tend to improve with higher values of , which confirms that direct associations between words in the word-node graph are more important than transitive ones (2nd plot from the left and the rightmost plot of Figure 1).",
"However, second-order transitive associations are still important because the results tend to decrease for close to 1.",
"We proposed a word embedding approach that leverages document-level non-local co-occurrences, in addition to the window-based local co-occurrences.",
"We proposed a graph-based framework, in which words are represented as nodes and the edges between a pair of words reflect the degree of association between them.",
"This association is a function of both the local and the document-level co-occurrences, which enables our approach to achieve the best of both worlds' in word embedding.",
"Experiments show that our proposed method outperforms local approaches, namely word2vec, Glove and FastText, on a number of different tasks.",
"Our approach also outperforms a naive black-box combination of embed-dings obtained separately by local and document-level approaches.",
"This proves the importance of addressing both these sources of information jointly in an embedding objective.",
"In future, we would like to explore ways of applying a similar graph based formalism for learning vectors for documents.",
"This work was supported by Science Foundation Ireland as part of the ADAPT Centre (Grant No. 13/RC/2106) ( www.adaptcentre.ie ).",
"This work started as an internship during the first au-thor's visit to IBM Research Lab, Ireland."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"method",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"objective",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"result",
"abstain",
"objective",
"other",
"other"
] |
[
"Text style transfer aims to controllably generate text with targeted stylistic changes while maintaining core meaning from the source sentence constant.",
"Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes ( e.g. positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence.",
"In this paper, we introduce a large-scale benchmark, STYLEPTB, with (1) paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text, as well as (2) compositions of multiple transfers which allow modeling of fine-grained stylistic changes as building blocks for more complex, high-level transfers.",
"By benchmarking existing methods on STYLEPTB, we find that they struggle to model fine-grained changes and have an even more difficult time composing multiple styles.",
"As a result, STYLEPTB brings novel challenges that we hope will encourage future research in controllable text style transfer, compositional models, and learning disentangled representations.",
"Solving these challenges would present important steps towards controllable text generation.",
"At the heart of interactive AI systems lies the element of communication as a channel to convey intentions using different stylistic attributes.",
"Research in human-AI interaction has focused on building dialog systems (Celikyilmaz et al., 2018), virtual assistants (Cooper et al., 2004), and intelligent agents (Kim et al., 2013; Liang et al., 2020a; Pittermann et al., 2010) that can communicate their intentions with specific styles for different situations, target audiences, and environments (Lample authors contributed equally The bad service of the waitresses make me dread going sometimes.",
"et al., 2019; Li et al., 2018).",
"For example, expressing the same facts using either formal or informal styles can be more suitable for certain target audiences (Rao and Tetreault, 2018).",
"What is a style in natural languages?",
"Existing style transfer benchmarks primarily focus on individual high-level stylistic changes across sentiment (Shen et al., 2017), formality (Rao and Tetreault, 2018), politeness (Madaan et al., 2020), and writing styles (Jhamtani et al., 2017).",
"Figure 1 provides some motivating examples to show that the high-level style transfers as commonly studied in existing benchmarks ( e.g. Yelp for sentiment (Shen et al., 2017) and GYAFC for formality (Rao and Tetreault, 2018)) can in fact be seen as composed from a dictionary of fine-grained style constructs.",
"This alternative way of studying styles brings additional flexibility that enables fine-grained control with the possibility to compose a broader space of styles spanning tense, sentence structure, phrase emphasis, and information contained in the sentence.",
"However, the missing link is a benchmark dataset that offers this type of fine-2117 grained style constructs, with the controllability to compose these stylistic transfers.",
"2 Related Work Several lines of research have aimed to formalize styles in natural languages through computational and linguistic perspectives (DiMarco and Hirst, 1993).",
"The first systematic formulation of styles was by McDonald and Pustejovsky (1985) and later extended by DiMarco and Hirst (1993) to 4 representational categories including lexical, syntax, thematic, and semantic aspects.",
"To fill this gap, we leverage research in linguistics to study formulations of styles across 4 representational categories: lexical, syntax, semantics, and thematics, that span the fundamental atomic transfers that text can undergo (McDonald and Pustejovsky, 1985; DiMarco and Hirst, 1993).",
"Using these insights, we introduce a large-scale benchmark with (1) paired sentences undergoing 21 fine-grained stylistic changes spanning the most atomic lexical, syntactic, semantic, and thematic style constructs, as well as (2) compositions of multiple transfers which model how fine-grained style constructs compose to form more complex, high-level transfers.",
"Our dataset, called STYLEPTB, builds upon Penn Treebank (Marcus et al., 1993) by annotating each sentence undergoing these fine-grained style constructs, resulting in a large-scale resource spanning 59 , 767 sentence pairs across 21 individual styles and an additional 35 , 887 sentence pairs across 32 compositions of multiple styles.",
"STYLEPTB allows us to study the performance of state-of-the-art style transfer models when faced with the new challenge of fine-grained style transfer.",
"It is interesting to observe that these models, while capable of performing high-level semantic changes, struggle with fine-grained changes, particularly in the syntactic and thematic domains.",
"A second analysis in this paper is to see how these models can handle compositions of multiple style constructs as a step towards controllable high-level style transfer.",
"However, we find that current models have an even more difficult time composing multiple styles.",
"As a step towards this desiderata, we also propose an approach (CS-GPT) based on pre-trained language models (Radford et al., 2019) that achieves compositional style transfer.",
"We believe that STYLEPTB will bring novel challenges that we hope will encourage research in controllable generation, compositionality of styles, and learning disentangled representations (John et al., 2019).",
"From a broader perspective, we conclude with the observation that controllable style transfer models trained on STYLEPTB can help mitigate social biases in pre-trained language models.",
"Following this, there has been some early efforts applying stylistic analysis into dialog generation (Hovy, 1987), machine translation (DiMarco, 1994), and text generation (Gatt and Krahmer, 2018).",
"We take advantage of this prior work when formalizing our new STYLEPTB dataset.",
"Current benchmarks for style transfer focus on high-level style definitions such as transfer of sentiment (Shen et al., 2017; Lample et al., 2019; Li et al., 2018; Wu et al., 2019), politeness (Madaan et al., 2020), formality (Rao and Tetreault, 2018; Liu et al., 2020; Krishna et al., 2020), writing styles (Jhamtani et al., 2017; Syed et al., 2020; Jin et al., 2020) and some other styles (Kang and Hovy, 2019).",
"However, these only focus on only high-level styles, unlike STYLEPTB.",
"Computational models for style transfer span statistical NLP methods (Hovy, 1987; Xu et al., 2012), neural generative models (Prabhumoye et al., 2018; Lample et al., 2019; He et al., 2020), and Retrieve-and-Edit approaches (Li et al., 2018; Hashimoto et al., 2018; Guu et al., 2018; Sud-hakar et al., 2019; Madaan et al., 2020).",
"These approaches work for a predefined set of styles but are unable to generalize to compositions of styles.",
"Evaluating style transfer is difficult due to the diversity of plausible transferred sentences.",
"In addition to automatic scores such as BLEU, perplexity, or binary classification accuracy of style transfer (Hu et al., 2017; Lample et al., 2019; He et al., 2020), other automatic metrics (Fu et al., 2018; Mir et al., 2019) and human evaluation are also commonly used (Li et al., 2018; Shen et al., 2017).",
"As a step towards enabling fine-grained control with the possibility to compose a broader space of styles, we first define style constructs at fine-grained levels spanning lexical, syntactic, semantic, and thematic aspects.",
"When selecting these style constructs, we have 2 goals in mind: (1) they should be representative of the four aspects (lexical, syntactic, semantic, thematic) following the formal categorizations in DiMarco and Hirst (1993), and (2) the transfers should be consistent ( i.e. well-defined such that if multiple annotators are asked to modify the same sentence, the results will be sim-ilar).",
"With these goals in mind, we summarize the 2118 Aspect Transfer Original Sentence Additional Info/ Emphasis Transferred Sentence LEXICAL Noun synonym replacementThe shift wo n't affect operations.",
"Semantic transfers are changes to the meaning of sentences (Bagha, 2011) that not only extend beyond lexical (Cruse et al., 1986) and syntax-level (Kratzer and Heim, 1998) changes, but also include modifications using indirect information such as referring (Strawson, 1950), situations (Barwise and Perry, 1981) or intentions and extensions (All-wood et al., 1977).",
"As a starting point, we defined two simple types of semantic transfers: (1) Info removal: 3 transfers on different deletions: word-level (removing adjectives and adverbs), phrase 2119 435 594 735 928 1051 1297 1247 1432 0 200 400 600 800 1000 1200 1400 1600 5 6 7 8 9 10 11 12 Adjectives8% Nouns31% Verbs19% Determiners10% Adverbs6% Others26% 500 417408403 308 257241203197176174173145145138137135123119113113105 94 91 90 90 89 88 86 85 0 100 200 300 400 500 600 m r .",
"following 21 chosen fine-grained style constructs spanning 4 categories and also provide detailed examples in Table 1. Lexical transfers are those at fine-grained lexicon levels ( i.e. vocabulary or words) that include word constitutions (Heine et al., 2002) and word meaning (Cruse et al., 1986).",
"As a starting point, we selected two types of lexical transfers: synonym/antonym replacements ( 6 transfers that replace nouns/verbs/adjectives with their synonyms/antonyms), and frequency-based replacements ( 2 transfers that replace words with their most/least appeared synonyms).",
"The syn-onym/antonym resources are taken from Wordnet (Fellbaum, 2012).",
"sentences (Chomsky, 2002) without affecting the content (Akmajian and Heny, 1980).",
"We selected three simple syntax transfers: tense changes ( 3 transfers: to past/present/future tense), voice changes ( 2 transfers: active to/from passive), proposition position changes ( 2 transfers: front to/from back).",
"level (removing propositions), and substatement level (removing entire substatements) that represent referring and situations , as well as (2) Info addition: 1 transformation that adds a given piece of information regarding a particular phrase in the current sentence representing extension .",
"Thematic transfers concern the placing of emphasis across different parts in a sentence (Steven-son et al., 1994) to highlight different aspects of the same event (DiMarco, 1994).",
"We defined two emphatic transfers across adjectives and verbs (ac-tions).",
"As an example of adjective emphasis, the hot meat is on the table emphasizes location, while the meat on the table is hot emphasizes the hot temperature.",
"To enforce consistency across annotators, we require adjective emphasis to rewrite the sentence into a be-statement of the emphasized adjective (as in the example above).",
"Analysis: To evaluate how useful these 21 selected atomic transfers are, we randomly sampled 50 sentence pairs from GYAFC and 50 sentences from Yelp with their reference transfer generated by Deep Latent Sequence Model (He et al., 2020) and manually tried to complete the transfers by composing one or more of the 21 atomic transfers we have defined, together with capitalization fixes and word-spelling fixes.",
"We found that 72% of transfers from GYAFC, and 82% of transfers from Yelp can be done this way.",
"Specifically, in GYAFC, 24% require one atomic transfer, and another 48% require composing multiple atomic transfers; in Yelp, 52% require one or less atomic transfers and another 30% require composing multiple atomic transfers.",
"The results of this analysis suggest that STYLE PTB's dictionary of atomic styles is already a good start in studying compositional style transfer.",
"STYLE PTBatomic transfers and their composition do indeed span a large percentage of current high-level style transfers.",
"Using these selected 21 style constructs, we now illustrate the steps towards collecting and annotating parallel sentences across style transfers.",
"We use Penn Treebank (PTB) (Marcus et al., 1993) as our source of sentences.",
"Additionally, the availability of parse trees in PTB allows us to automate the majority of syntactic transfers using rule-based methods.",
"We begin with a total of 43 , 948 sentences in the full PTB before removing sentences that are incomplete, too long (over 12 words), or too short (less than 5 words).",
"This leaves 7 , 719 sentences (see Figure 2 for statistics and Appendix A.1 for full details).",
"Automated rule-based transfers: For 18 of the 21 transfers (lexical, syntax, and semantic transfers except Info Addition), we defined rule-based transfers using NLTK (Loper and Bird, 2002), parse trees (syntax, semantics), and WordNet (lexical).",
"After human quality control, the total number of sentences transferred is listed in Table 2 (see Appendix A.2 for more details on automated generation and Appendix A.4 for human evaluation on quality of generated sentences) Transfers with human annotations: For the remaining 3 transfers, we have human annotators (via Amazon Mechanical Turk) manually rewrite them due to the difficulty of automating the process.",
"See Appendix A.3 for details on the data generation, human annotation and quality assurance process for each of the three transfers.",
"After annotations and quality control, we obtained 696 rewritten sentences for adjective emphasis, 1201 rewritten sentences for verb emphasis, and 2114 2120 Category Transfer Number Lexical Noun synonym replacement 5948 Noun antonym replacement 2227 Verb synonym replacement 2574 Verb antonym replacement 1284 ADJ synonym replacement 434 ADJ antonym replacement 1146 Most frequent synonym replacement 4722 Least frequent synonym replacement 7112 Syntax To future tense 7272 To present tense 4365 To past tense 4422 Active passive 2808 PP front back 467 Semantics(informationdeletion) ADJ or ADV removal 4863 PP removal 4767 Substatement removal 1345 Information Addition 2114 Thematic Verb/action emphasis 1201 Adj emphasis 696 Table 2: STYLEPTB is a large-scale resource spanning 59 , 767 sentence pairs across 21 individual styles.",
"valid sentence-information pairs with their transferred sentence with information added.",
"Lexical transfers can be done by replacing individual words and is simple to evaluate.",
"To evaluate the difficultly of the remaining 13 syntax, semantic, and thematic transfers, we calculated the token-level ( i.e. word level) Hamming distance between original and transferred sentences.",
"Using this metric, we categorized these 13 transfers into easy, medium and hard categories (see Table 3).",
"We also evaluated semantic measures from BERT embeddings (Devlin et al., 2018) but found it less correlated with human judgment (see Appendix A.5).",
"To allow for compositionality, we also generated compositional data that includes parallel pairs of sentences linked by multiple sequential transfers.",
"To compose automatic transfers, we applied a sequence of rule-based transfers starting with parse trees (see Table 4).",
"To compose transfers that involve human annotations, we apply a sequence of reverse changes on the original sentences with parse trees (since human rewritten sentences no longer have parse trees), before chaining the sequence of automatic reverse transfers with the final human-annotated transfer (see Figure 3).",
"We extend the pre-trained GPT2 language model (Radford et al., 2019) for parallel style transfer by giving it designated style transfer tokens as input in addition to the source sentence.",
"For example, for each individual binary style s i , we define a style transfer token i { 0 , 1 , 2 } where i = 0 represents keeping s i unchanged, i = 1 represents a change from s i = 0 to s i = 1 , and vice versa for i = 2 .",
"We likewise extend the definition of i for styles taking more than 2 values.",
"Given a parallel (source, target) pair ( s, t ) , we define the appropriate transfer token { 0 , 1 , 2 } 2121 Easy Transfers Baseline Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE_L CiDER To Future Tense GPT2 0.895 0.852 0.813 0.778 0.540 0.899 7.709 SEQ 2 SEQ 0.527 0.368 0.261 0.188 0.173 0.531 1.525 RETRIEVEEDIT 0.899 0.854 0.815 0.778 0.531 0.901 7.731 HUMAN 0.954 0.915 0.884 0.855 0.636 0.964 9.174 ADJ or ADV Removal GPT2 0.647 0.508 0.394 0.308 0.313 0.652 3.259 SEQ 2 SEQ 0.450 0.274 0.172 0.112 0.140 0.469 1.171 RETRIEVEEDIT 0.897 0.841 0.786 0.731 0.511 0.919 7.461 HUMAN 0.933 0.894 0.870 0.847 0.591 0.965 8.924 Medium Transfers Baseline Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE_L CiDER Substatement Removal GPT2 0.430 0.332 0.247 0.176 0.250 0.588 3.090 SEQ 2 SEQ 0.317 0.192 0.110 0.001 0.100 0.368 1.041 RETRIEVEEDIT 0.706 0.678 0.647 0.607 0.405 0.767 6.183 HUMAN 0.731 0.720 0.705 0.685 0.607 0.788 7.691 Information Addition GPT2 0.479 0.305 0.189 0.121 0.207 0.475 1.359 SEQ 2 SEQ 0.345 0.180 0.094 0.053 0.098 0.335 0.632 RETRIEVEEDIT 0.493 0.396 0.328 0.275 0.284 0.603 3.401 HUMAN 0.846 0.762 0.690 0.624 0.521 0.892 6.863 Hard Transfers Baseline Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE_L CiDER Active To Passive GPT2 0.476 0.329 0.238 0.189 0.216 0.464 1.820 SEQ 2 SEQ 0.373 0.220 0.141 0.103 0.131 0.345 0.845 RETRIEVEEDIT 0.681 0.598 0.503 0.427 0.383 0.663 4.535 HUMAN 0.931 0.881 0.835 0.795 0.587 0.905 8.603 Adjective Emphasis GPT2 0.263 0.079 0.028 0.000 0.112 0.188 0.386 SEQ 2 SEQ 0.187 0.058 0.018 0.000 0.059 0.179 0.141 RETRIEVEEDIT 0.387 0.276 0.211 0.164 0.193 0.369 1.679 HUMAN 0.834 0.753 0.679 0.611 0.522 0.811 6.796 Verb/Action Emphasis GPT2 0.309 0.170 0.095 0.041 0.140 0.292 0.593 SEQ 2 SEQ 0.289 0.127 0.066 0.038 0.098 0.275 0.300 RETRIEVEEDIT 0.416 0.284 0.209 0.148 0.223 0.423 1.778 HUMAN 0.649 0.569 0.493 0.421 0.433 0.693 5.668 Table 5: Evaluation results on easy (top), medium (middle), and hard (bottom) transfers.",
"6 Experiments We test the performance of current style transfer models on STYLEPTB.",
"Anonymized data and code is included in the supplementary, and we present extra details and results in Appendix B and C. 2122 6.1 Datasets and Metrics We use STYLEPTB and evaluate on the 13 non-lexical transfers (since lexical changes works best with fixed word substitutions).",
"and train using maximum likelihood estimation to predict every word t j , for j = 1 , 2 , . . . , T , in the target sentence given the source and : = arg max E ( s,t )D T j = 1 log p ( t j ; s, ) , (1) where denotes the pre-trained GPT2 parameters and denotes the parameters after fine-tuning on STYLEPTB.",
"Note that we also train the model to reconstruct the same source sentence again when setting = 0 (no style change), which we found to help bridge the domain shift between data used to pre-train GPT2 and sentences in STYLEPTB.",
"As a step towards compositionality , we also train with (source, target) pairs that undergo multiple atomic style transfers as provided in STYLEPTB, resulting in multiple style transfer tokens i being activated at the same time.",
"We call the resulting model CS-GPT (Compositional Style GPT) and show its architecture in Figure 4. Learning separate representations for each i results in disentangled style variables that can then be composed as desired.",
"Another benefit of using disentangled style variables is the ability of a single model in performing multiple style transfers.",
"6.3 Results and Observations We evaluate these 3 baseline models on the style transfers in STYLEPTB and show results in Table 5. We make the following observations: Baseline comparisons: RETRIEVEEDIT performed equally well compared to GPT2 in some transfers such as To Future Tense and performs significantly better compared to GPT2 in most transfers.",
"When qualitatively observing the generated sentences, we found that while GPT2 can learn syntactic and semantic transfers, they suffer in reconstructing the rest of the sentence ( e.g. making word repetitions).",
"This was not an issue Clarity Context Style GPT2 1 .",
"Please refer to Appendix B.1 for dataset preprocessing details.",
"Automated evaluation metrics consists of automatic BLEU scores, METEOR scores, ROUGE_L scores, and CiDER scores between generated and ground truth sentences (Sharma et al., 2017).",
"In addition, we did human evaluations on random sets of 10 samples generated by each model for each transfer.",
"We followed prior work (He et al., 2020) and had 2 independent annotators each rate transferred sentences on three aspects (clarity/grammar, content preservation, style change) on a 1 5 Likert scale, and takes average.",
"We evaluate the following baselines commonly used in style transfer.",
"Since none of these existing models handle compositions of styles, we train separate models on each of the 13 transfers.",
"1) GPT2: We fine-tune pre-trained GPT2 (Rad-ford et al., 2019) on each transfer with the source as input and predicting the target using MLE, similar to Liu et al. (2020); Syed et al. (2020).",
"2) SEQ 2 SEQ : A Seq2Seq model (Sutskever et al., 2014) with attention trained using MLE (Zhou et al., 2020; Jin et al., 2020).",
"3) RETRIEVEEDIT : Given input x , a retriever is trained to pick a similar training example ( x , y ) .",
"We treat y as our prototype and use a trained editor to edit it into desired output y (Guu et al., 2018; Madaan et al., 2020).",
"4) HUMAN : We also report human performance for each style transfer by having two independent human annotators manually perform the style transfer on 20 sampled sentences.",
"for RETRIEVEEDIT since it works by editing the sentence from the prototype.",
"Both GPT2 and RETRIEVEEDIT significantly outperform SEQ 2 SEQ models on all 13 non-lexical transfers.",
"Difficulties of transfers: We also compare the relative difficulty of transfers based on the automatic metrics described in Section 4.3.",
"In line with our Hamming distance metric, we found that thematic transfers are especially difficult all three baselines struggled on this task, which is intuitive because shifting emphasis requires completely different sentence structure changes on sentences and emphasized words.",
"We found that GPT2 and SEQ 2 SEQ tend to struggle with grammar and word repetitions, while RETRIEVEEDIT sometimes follows the structural edits in the chosen (and often completely unfitting) examples, resulting in malformed outputs (see examples in Appendix C.1).",
"All current methods significantly fall short of human performance especially on hard transfers.",
"Therefore, we believe that STYLEPTB brings novel challenges that will spark future research in modeling fine-grained style changes.",
"Human evaluation: We sampled 10 transferred sentences from each automatic generations models for each transfer and asked 2 independent annotators to rate them.",
"We show average results below for one of the hard transfers (Verb Emphasis).",
"From Table 6, we found that all approaches fall far short of human performance, which was judged by a separate human as having almost perfect clarity, content, and style metrics.",
"Furthermore, GPT2 gets higher style scores while RETRIEVEEDIT excels at grammar and content preservation, which further supports our qualitative observations above.",
"Full results for human evaluations are available in Table 17 in Appendix C.1.",
"2123 Transfers Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE_L CiDER ToFuture+ActiveToPassive GPT2 0.391 0.222 0.120 0.065 0.167 0.373 0.866 CS-GPT-ZERO 0.419 0.243 0.114 0.047 0.209 0.325 1.238 CS-GPT 0.496 0.340 0.240 0.185 0.217 0.479 1.800 ToPast+PPRemoval GPT2 0.714 0.640 0.573 0.510 0.374 0.724 5.152 CS-GPT-ZERO 0.542 0.389 0.268 0.182 0.314 0.535 2.103 CS-GPT 0.772 0.695 0.624 0.564 0.421 0.775 5.585 Table 7: Results on compositions of transfers: CS-GPT with compositional data works better than CS-GPT-ZERO (without compositional data), and sequentially applying GPT2 models.",
"7 Broader Impact: Mitigating Biases Unconditional language models have been shown to perpetuate undesirable stereotypes during generation which disproportionately harm underrepresented social groups (Liang et al., 2020b; Ravfogel et al., 2020; Sheng et al., 2020).",
"As one possible application of fine-grained style transfer (in addition to many others), we hypothesize that more fine-grained control over the generated outputs can 2124 Transfer To Future + Passive To Active To Past + PP Removal Source Sentence NUM % was risen by sales to NUM billion from NUM billion.",
"As a step towards learning compositional transfers,",
"we implemented the following baselines: 1. GPT2: Sequentially applying the GPT2 model trained for single transfers multiple times to perform compositional transfers.",
"2. CS-GPT: Our proposed CS-GPT model (detailed in Section 5) trained on compositional transfer pairs found in STYLEPTB.",
"3. CS-GPT-ZERO : An ablation of CS-GPT trained only on individual style changes but tested in a zero-shot setting on compositional transfers.",
"We evaluated these models on two compositional transfers: Tense+Voice (composing tense changes and active/passive voice changes), and Tense+PP Removal (composing tense changes and PP Re-moval).",
"We conveniently used the numerical pre-fixes in the datasets as transfer tokens.",
"The results are shown in Table 7 and we make the following observations: CS-GPT works best for compositional transfers: CS-GPT significantly outperforms existing methods for compositional style transfer.",
"This is expected, as CS-GPT is trained on the full compositional dataset, while CS-GPT-ZERO is only trained on part of the compositional data and SEQGPT is trained on single-transfer parallel data.",
"Qualitatively, we observed that CS-GPT is able to perform each required transfer at the same time, producing outputs with relatively low reconstruction error compared to the other two methods.",
"We included a few samples generated by the three models in Table 9 with more examples in Appendix C.2.",
"whether CS-GPT can learn to compose transfers in a zero-shot manner.",
"While CS-GPT outperforms CS-GPT-ZERO and existing models, all still struggle to perform zero-shot compositions.",
"We noticed that CS-GPT-ZERO usually only performs one of the necessary transfers: e.g. in a Tense+Voice task, CS-GPT-ZERO tends to only make the tense change, not the voice change.",
"Quantitatively, in the Tense+PP Removal dataset, CS-GPT-ZERO performs much worse than either CS-GPT or sequentially applying GPT2; in Tense+Voice dataset, CS-GPT-ZERO is similar to GPT2.",
"We believe that sequentially applying GPT2 accumulates errors present in each one.",
"Training on compositional styles may improve fine-grained styles: We observe that CS-GPT trained on compositional data can achieve similar performance as compared to GPT2 trained specifically for that one single transfer.",
"Sometimes CS-GPT even outperforms GPT2, as shown in Table 8 (see Table 20 in Appendix for full results).",
"Therefore, CS-GPT has leveraged compositional structure and data to perform strongly on multiple single and compositional transfers with just one model.",
"help to control the output sentence and mitigate bias.",
"To validate our hypothesis, we perform a proof-of-concept experiment: we show clearly biased sentences GPT2 generated via given prompts from Sheng et al. (2019) (shown underlined), before rewriting them using fine-grained transfers learned by our model.",
"1. Simple lexical transfers can mitigate bias by replacing certain stereotyped nouns with alternatives (through synonym/antonym replacement): GPT2: The Black man drove a car to a house where the victim had a family member.",
"Antonym replacement: The Black man drove a car to a house where the beneficiary had a family member.",
"Another technique that may be useful is to replace the key nouns with vague pronouns: GPT2: The Hispanic man said that he was walking his dog when the suspect yelled.",
"Replace nouns: The hispanic man said that he was walking his dog when they yelled.",
"In the following two examples, the information removal is done by GPT2: GPT2: The woman walks home from the theatre crying .",
"Info removal: The woman walks home from the theatre.",
"GPT2: The Asian man drove a black BMW to the station to meet with an officer who would be taking him to a jail .",
"Info removal: The Asian man drove a black BMW to meet with an officer.",
"We can also replace the removed part with uniformly sampled information with Info Addition.",
"Since existing baselines currently struggle, we present a manual example as proof of concept: GPT2: The woman worked as a medical assistant in a health center.",
"2. Semantic changes can also help by removing biased information from generated test using information removal.",
"Info removal: The woman worked in a health center.",
"Info addition with worked as doctor: The woman worked as a doctor in a health center.",
"Finally, we performed a qualitative evaluation: we sampled 49 sentence completions using prompts from Sheng et al. (2019) that focus on stereotypical associations between gender and occupations (e.g. The woman worked as a babysitter. ).",
"We obtained post-processed versions using Info removal followed by Info addition with uniformly sampled new occupations.",
"When presented to two independent human annotators, they judged 22 / 49 sentences as showing significantly lower bias with the remaining showing little or no bias change, indicating that fine-grained style transfer presents a new perspective to mitigating social biases in language models (see Appendix D for evaluation details).",
"In this paper, we propose a large-scale benchmark, STYLEPTB, for fine-grained style transfer spanning atomic lexical, syntactic, semantic, and thematic changes as well as their compositions into high-level transfers.",
"We show that STYLEPTB provides an important step towards training more controllable text generators and removing social biases from generated text.",
"However, existing style transfer models struggle to perform fine-grained changes and have an even more difficult time composing multiple styles.",
"As a result, STYLEPTB brings novel challenges that we hope will inspire future research in controllable text generation, compositional models, and style disentanglement.",
"PPL and LM were supported in part by the National Science Foundation (Awards #1750439, #1722822) and National Institutes of Health.",
"HP and BP are supported by the DARPA D3M Program and The Boeing Company.",
"RS was supported in part by NSF IIS1763562 and ONR Grant N000141812861.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of National Science Foundation, National Institutes of Health, DARPA, The Boeing Company, or the ONR and no official endorsement should be inferred.",
"We would also like to acknowledge NVIDIA's GPU support and the anonymous reviewers for their constructive comments."
] | [
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"Submodular maximization with the greedy algorithm has been studied as an effective approach to extractive summarization .",
"This approach is known to have three advantages: its applicability to many useful submodular objective functions, the efficiency of the greedy algorithm, and the provable performance guarantee.",
"However, when it comes to compressive summarization , we are currently missing a counterpart of the extractive method based on submodularity.",
"In this paper, we propose a fast greedy method for compressive summarization.",
"Our method is applicable to any monotone submodular objective function, including many functions well-suited for document summarization.",
"We provide an approximation guarantee of our greedy algorithm.",
"Experiments show that our method is about 100 to 400 times faster than an existing method based on integer-linear-programming (ILP) formulations and that our method empirically achieves more than 95 %-approximation.",
"Automatic document summarization continues to be a seminal subject of study in natural language processing and information retrieval (Luhn, 1958; Edmundson, 1969; Cheng and Lapata, 2016; Peyrard and Eckle-Kohler, 2017).",
"Owing to the recent advances in data collection, the size of document data to be summarized has been exploding, which has been bringing a drastic increase in the demand for fast summarization systems.",
"Extractive summarization is a widely used approach to designing fast summarization systems.",
"With this approach, we construct a summary by extracting some sentences from the original document(s).",
"The extractive approach is not only fast but also has the potential to achieve state-of-the-art ROUGE scores (Lin, 2004), which was revealed by Hirao et al. (2017b).",
"In many existing methods, sentences are extracted by solving various subset selection problems: for example, the knapsack problem (McDonald, 2007), maximum coverage problem (Filatova and Hatzivassiloglou, 2004; Takamura and Okumura, 2009a), budgeted median problem (Takamura and Okumura, 2009b), and submodular maximization problem (Lin and Bilmes, 2010).",
"Of particular interest, the method based on submodular maximization has three advantages: (1) Many objective functions used for document summarization are known to be monotone and submodular (Lin and Bilmes, 2011; J Kurisinkel et al., 2016); examples of such functions include the coverage function , diversity reward function , and ROUGE .",
"Therefore, the method can deliver high performance by using monotone submodular objective functions that are suitable for the given tasks.",
"(2) The efficient greedy algorithm is effective for the submodular maximization problem, which provides fast summarization systems.",
"(3) Theoretical performance guarantees of the greedy algorithm can be proved; for example, a 1 2 (1 e 1 ) approximation guarantee can be obtained.",
"Although the above extractive methods successfully obtain summaries with high ROUGE scores, they have the following shortcoming: A long sentence typically has redundant parts, which means a summary constructed simply by extracting some sentences often includes many redundant parts.",
"As a result, if the limitation placed on summary length is tight, the extractive approach cannot yield an informative summary.",
"Compressive summarization is known to be effective in overcoming this problem.",
"With this approach, a summary is constructed with some 1737 compressed sentences, and thus we can obtain a concise and informative summary.",
"To make compressed sentences, the dependency-tree-based approach (Filippova and Strube, 2008) is often used, which is advantageous in that each compressed sentence preserves its original dependency relations.",
"Specifically, given a set of dependency trees constructed for sentences in the original documents, a summary is obtained by extracting some rooted subtrees; each subtree corresponds to a compressed sentence.",
"Different from the extractive summarization, the dependency relations in each sentence must be taken into account, and hence the aforementioned extractive methods cannot be applied to compressive summarization.",
"A number of methods have been proposed for compressive summarization (Berg-Kirkpatrick et al., 2011; Almeida and Martins, 2013; Morita et al., 2013; Kikuchi et al., 2014; Hirao et al., 2017a).",
"These methods formulate summarization as a type of combinatorial optimization problem with a tree constraint, and they obtain summaries by solving the problem.",
"Unfortunately, the existing methods have two drawbacks: (1) The class of objective functions to which they are applicable is limited; for example, they work only with the linear function or coverage function.",
"As a result, the performance of these methods cannot be improved by elaborating the objective functions.",
"(2) They contain costly procedures as their building blocks: integer-linear-programming (ILP) solvers, dynamic programming (DP) algorithms, and so on.",
"Therefore, they are not fast enough to be applied to large-scale document data.",
"In a nutshell, compressive summarization is currently missing a fast method that is applicable to a wide variety of objective functions.",
"In this paper, we propose a submodularity-based greedy method for compressive summarization.",
"Our method is, so to speak, a compressive counterpart of the greedy method for extractive summarization (Lin and Bilmes, 2010).",
"Similar to the extractive method, our method has the three key advantages: 1. Our method works with any monotone submodular objective function, a wide class of useful objective functions, examples of which include the coverage function, ROUGE , and many others (Lin and Bilmes, 2011; J Kurisinkel et al., 2016).",
"2. Our method is faster than existing compressive summarization methods since it employs the efficient greedy algorithm.",
"Specifically, given a set, V , of all textual units contained in the document data and a summary length limitation value, L , our method requires at most O ( L | V | ) objective function evaluations.",
"Experiments show that our method is about 100 to 400 times faster than the ILP-based method implemented with CPLEX .",
"3. A theoretical guarantee of our method can be proved; specifically, a 12 (1 e 1 / ) approximation guarantee can be obtained, where is a parameter defined from given document data (a definition is shown later).",
"This result generalizes the 12 (1 e 1 ) approximation of the greedy algorithm for submodular maximization with a knapsack constraint (Leskovec et al., 2007).",
"In experiments, our method achieved more that 95%-approximation.",
"Furthermore, our method attained ROUGE 1 scores comparable to those of the ILP-based method.",
"There are many existing methods for compressive summarization (Berg-Kirkpatrick et al., 2011; Almeida and Martins, 2013; Morita et al., 2013; Kikuchi et al., 2014; Hirao et al., 2017a), and they attempt to create summaries by solving optimization problems with a tree and length constraints.",
"Unfortunately, these methods accept only a few objective functions.",
"A common approach is to use ILP formulations.",
"Berg-Kirkpatrick et al. (2011) formulate the problem as an ILP with the coverage objective function, which is solved by using an ILP solver.",
"Almeida and Martins (2013) also employs an ILP formulation and solves the problem via an algorithm based on dual decomposition , which runs faster than an ILP solver.",
"1 These ILP-based methods are optimal in terms of objective function values.",
"However, it is hard to apply them to large-scale document data since to solve ILPs often takes long computation time.",
"1 Their method was observed to be about 25 times faster than GLPK , a commonly used free ILP solver.",
"On the other hand, CPLEX , which is a commercial ILP solver used in our experiments, was observed to be about 3 to 20 times faster than GLPK , and our method is about 100 to 400 times faster than CPLEX .",
"Consequently, our method is estimated to be about 12 to 320 times faster than their method.",
"In an attempt to uncover the potential power of dependency-tree-based compressive summarization, Hirao et al. (2017a) solved ILPs with the ROUGE objective function with an ILP solver.",
"Their method obtains summaries by directly maximizing the ROUGE score for given reference summaries (i.e., any other methods cannot achieve higher ROUGE scores than their method).",
"The resulting summaries, called oracle summaries , were revealed to attain substantially high rouge scores, which implies that there remains much room for further research into compressive summarization.",
"A greedy method with a DP algorithm (Morita et al., 2013) is probably the closest one to our idea.",
"Their method iteratively chooses compressed sentences in a greedy manner, for which a DP algorithm is employed.",
"Thanks to the submodularity of their objective function, their method enjoys a 12 (1 e 1 ) -approximation guarantee.",
"However, because of the costly DP procedure, their method is less scalable than the standard greedy methods such as the extractive method (Lin and Bilmes, 2010) and ours.",
"Moreover, it is applicable only to objective functions that are designed for their problem settings; for example, it cannot use ROUGE as an objective function.",
"A high-level sketch of our approach is as follows: As in many existing works, we formulate the compressive summarization task as a combinatorial optimization problem with a tree constraint, which we call the submodular tree knapsack problem (STKP).",
"STKP is generally NP-hard; in fact, it includes the knapsack problem and maximum coverage problem as special cases.",
"Unfortunately, as we will see later, a naive greedy algorithm for STKP does not offer any approximation guarantee in general.",
"The main difficulty with STKP is that its tree constraint is too complex.",
"To avoid dealing with the complex constraint directly, we transform STKP into a special case of the submodular cost submodular knapsack problem (SCSKP) (Iyer and Bilmes, 2013).",
"For general SCSKP, no approximation guarantee has been proved.",
"Fortunately, in our case, a 12 (1 e 1 / ) -approximation can be proved by exploiting the structure of the resulting SCSKP.",
"Thus we obtain a fast greedy method for compressive summarization, which works with various monotone submodular objective functions and enjoys an approximation guarantee.",
"Given finite set V (e.g., a set of chunks), set function g : 2 V R is said to be submodular if g ( A B ) + g ( A B ) g ( A ) + g ( B ) holds for any A, B V .",
"We define g ( A | B ) := g ( A B ) g ( B ) .",
"The submodularity is also characterized by the following diminishing return property : g ( { v } | A ) g ( { v } | B ) for any A B and v V \\ B .",
"Set function g is monotone if g ( A ) g ( B ) for any A B .",
"In this paper, we focus on monotone submodular functions such that g ( ) = 0 .",
"The submodularity and monotonicity are a natural fit for document summarization; intuitively, the marginal gain, g ( { v } | S ) , of adding new chunk v V to summary S V is small if S already has many chunks (submodu-larity), and a summary becomes more informative as it gets more chunks (monotonicity).",
"In fact, as in (Lin and Bilmes, 2011), many objective functions well-suited for document summarization have submodularity and monotonicity; examples of such functions include the coverage function, diversity reward function, and ROUGE , to name a few.",
"We formulate the summarization task as the following subtree extraction problem called STKP hereafter.",
"In what follows, we let [ M ] := { 1 , . . . , M } for any positive integer M .",
"We attempt to summarize document data consisting of N sentences.",
"Each sentence forms a dependency tree, which can be constructed by using existing methods (e.g., (Filippova and Strube, 2008; Filippova and Altun, 2013)).",
"For convenience, we call the dependency tree of a sentence the sentence tree .",
"The i -th sentence ( i [ N ] ) yields sentence tree T i = ( V i , E i ) rooted at r i V i , where V i is a set of textual units (e.g., words or chunks) contained in the i -th sentence, and edges in E i represent their dependency relations.",
"We define a document tree with a dummy root vertex r as T := ( { r } V, E ) , where V and E are vertex and edge sets, respectively, defined as follows: V := [ i [ N ] V i , E := [ i [ N ] { E i { ( r , r i ) }} .",
"Namely, V is the set of all textual units contained in the document data, and edges in E represent the dependency relations as well as the relations between r and r i , with which the multiple sentence 1739 1 2 1 2 5 6 3 4 1 2",
"trees form a single document tree.",
"Figure 1",
"(a) illustrates an example of a document tree.",
"Given document tree T , a summary preserves the original dependency relations if it forms a subtree rooted at r in T .",
"Therefore, our aim is to find a rooted subtree of T that includes informative textual units.",
"For each v V , the length of v is denoted by v 0 ; for example, v is the number of words or characters in chunk v . If S V is a subset of the textual units included in an obtained summary, its total length must be less than or equal to the given length limitation value L 0 ; namely, the following knapsack constraint must be satis-fied: P v S v L . The quality of summary S is evaluated by a monotone submodular function g . Consequently, compressive summarization is formulated as STKP: maximize S V g ( S ) (1) subject to X v S v L, S { r } forms a subtree in T . At first glance, it may seem that the following naive greedy approach works well for this problem: Starting from root r , we sequentially add the most beneficial child to the current solution until the knapsack constraint is violated. Unfortunately, the approximation ratio of this method can become arbitrarily bad since it may miss beneficial vertices that are far from r ; if such missed vertices are more beneficial than those added to the solution by a considerable margin, the resulting approximation ratio is almost equal to zero. To avoid this difficulty, we reformulate STKP in the next section. 4 Proposed Method We observed that the naive greedy algorithm does not work well for STKP (1) due to the complex tree constraint. We circumvent this difficulty by transforming STKP into a special case of the submodular cost submodular knapsack problem (SCSKP). We then provide a greedy algorithm for SCSKP. An approximation guarantee of the greedy algorithm is also presented. 4.1 Problem Reformulation We show that STKP can be transformed into SCSKP. Let P be a set of all paths that connect v V to r . Note that there is a one-to-one correspondence between v V and p P that connects v to r , and hence |P| = | V | . We define V p V as the set of vertices that are included in p P , and we let VX := S p XV p for any X P . If X P , then VX { r } forms a subtree in T . Conversely, if S { r } forms a subtree in T ( S V ), there exists X P such that VX = S . Thus STKP (1) can be transformed into the following maximization 1740 Algorithm 1 Greedy 1: U P , X 2: while U 6 = do 3: p = argmax p 0 U f ( p 0 | X ) c ( p 0 | X ) 4: if c ( X + p ) L then 5: X X + p 6: end if 7: U U p 8: end while 9: p = argmax p 0 P f ( p 0 ) 10: return Y = argmax X 0 { X, p } f ( X 0 ) problem on P : maximize X P f ( X ) := g ( VX ) (2) subject to c ( X ) := X v VX v L. We here suppose that c ( p ) L holds for all p P ; any p P violating this condition can be removed in advance since no feasible solution includes such p . The set functions f and c are monotone submodular functions defined on P (see the Appendix), and thus the above problem is SCSKP. Figure 1 illustrates how to transform STKP into SCSKP. 4.2 Greedy Algorithm We provide a greedy algorithm for SCSKP (2). In what follows, given any X, Y P , we define the binary operators + and on P as X + Y := { p P : p X and/or p Y } , X Y := { p P : p X and p / Y } . Namely, they are the union and subtraction of two subsets defined on P . We sometimes abuse the notation and regard p P as a subset of P ; for example, we let X + p = X + { p } for any X P and p P . Furthermore, we define f ( X | Y ) := f ( X + Y ) f ( Y ) and c ( X | Y ) := c ( X + Y ) c ( Y ) for any X, Y P . Algorithm 1 presents a concise description of the greedy algorithm for SCSKP (2). In practice, function evaluations in the above greedy algorithm can be reduced by using the technique provided in (Leskovec et al., 2007) with some modifications. 
The resulting greedy algorithm requires at most O ( L | V | ) function evaluations. Different from the naive greedy algorithm explained in Section 3, the above greedy algorithm is performed on the set of all rooted paths, P . Thus, even if beneficial vertices are far from r , rooted paths that include such beneficial vertices are considered as candidates to be chosen in each iteration. As a result, we get the following performance guarantee for Algorithm 1; we define i as the number of leaves in T i for i [ N ] , and we let := max i [ N ] i . Theorem 1. If Y P is the output of Algorithm 1 and X P is an optimal solution for SCSKP (2) , then we have f ( Y ) 12 (1 e 1 / ) f ( X ) . Proof. See the Appendix. In other words, Algorithm 1 enjoys a 12 (1 e 1 / ) -approximation guarantee. Notably, if the values of i ( i [ N ] ) are bounded by a small constant for all N sentences, the performance guarantee does not deteriorate no matter how many sentences are in the document data. This implies that our method works effectively for summarizing large-scale document data that comprises many sentences. 4.3 Relation with Existing Work We first see some existing results. For submodular maximization with a size constraint (i.e., | S | must be at most a certain value), the greedy algorithm has been proved to achieve (1 e 1 ) approximation (Nemhauser et al., 1978). Khuller et al. (1999) studied the maximum coverage problem with a knapsack constraint, and proved that the greedy algorithm achieves (1 e 1 / 2 ) approximation. They also showed that (1 e 1 ) approximation can be obtained by executing the greedy algorithm O ( | V | 3 ) times, and this result was generalized to the case with a submodular objective function (Sviridenko, 2004). The greedy algorithm for submodular maximization with a knapsack constraint is known to achieve 12 (1 e 1 ) -approximation (Leskovec et al., 2007). Lin and Bilmes (2010) stated that (1 e 1 / 2 ) approximation can be obtained with the greedy algorithm, but a mistake in their proof was pointed out by Morita et al. (2013). 2 Unlike the above problem settings, submodular maximization with a tree constraint has only a few literatures. Krause et al. (2006) studied submodular maximization over a graph with a knapsack and tree constraints, but their algorithm, called pSPIEL , 2 Probably, this mistake can be fixed with the techniques used in (Khuller et al., 1999). 1741 requires a complicated preprocessing step and imposes some assumptions on the problem, which do not hold in most summarization tasks. Iyer and Bilmes (2013) addressed SCSKP, a more general problem setting. Their algorithm is, however, more expensive than the greedy algorithm, and it only achieves a bi-criterion approximation guarantee (i.e., not only the objective value but also the magnitude of constraint violation is approximated); if we use this algorithm for document summarization, a resulting summary may violate the length limitation. We turn to the relation between our result and the existing ones. We consider submodular maximization with a knapsack constraint. This problem can be formulated as an STKP on a star graph , whose vertex and edge sets are { r , r 1 , . . . , r N } and { ( r , r 1 ) , . . . , ( r , r N ) } , respectively (i.e., every leaf corresponds to an element in V = { r 1 , . . . , r N } ). In this case, we have = 1 , and thus we obtain a 12 (1 e 1 ) -approximation guarantee, matching the result of (Leskovec et al., 2007). 
5 Objective Functions. As presented in (Lin and Bilmes, 2011), many objective functions used for document summarization are known to be monotone and submodular. Below we list the functions used in our experiments.

Coverage Function: using the coverage function is a simple but powerful approach to document summarization, and it appears in many existing works (e.g., Filatova and Hatzivassiloglou, 2004; Takamura and Okumura, 2009a; Berg-Kirkpatrick et al., 2011). Let $M$ be the number of distinct words in the document data, indexed by $j \in [M]$, and let $w_j$ ($j \in [M]$) be the weight of the $j$-th word. Given a summary $S \subseteq V$, the coverage function is defined as $\mathrm{COV}(S) := \sum_{j=1}^{M} w_j z_j$, where $z_j \in \{0, 1\}$ is a binary decision variable that indicates whether the $j$-th word is included in $S$; more precisely, $z_j = 1$ if and only if at least one textual unit in $S$ contains the $j$-th word.

Coverage Function with Rewards: a summary obtained with the above coverage function often consists of many overly compressed sentences, which typically leads to low readability. Morita et al. (2013) addressed this problem by adding a positive reward term to the coverage function. Given a summary $S$, let $b_{r_i} \in \{0, 1\}$ ($i \in [N]$) be a binary decision variable indicating whether $r_i$, the root node of sentence tree $T_i$, is included in $S$. Note that, if $S \cup \{r\}$ forms a rooted subtree in $T$, we have $b_{r_i} = 1$ if and only if at least one textual unit of the $i$-th sentence appears in $S$. With these additional variables, the modified coverage function can be written as $\mathrm{COVR}(S) := \mathrm{COV}(S) + \lambda \left( \sum_{v \in S} \ell_v - \sum_{i=1}^{N} b_{r_i} \right)$, where $\lambda \ge 0$ is a parameter that controls the rate of sentence compression. The value of $\sum_{i=1}^{N} b_{r_i}$ equals the number of sentences whose textual unit(s) are used in $S$; therefore, a summary that consists of fewer sentences tends to get a higher objective value, thus enhancing readability.

ROUGE: ROUGE (Lin, 2004) is widely used for summarization evaluation and is known to be highly correlated with human evaluation. Furthermore, ROUGE is known to be monotone and submodular (Lin and Bilmes, 2011). Specifically, given $K$ reference summaries $R_1, \dots, R_K \subseteq V$ and a function $C_e(S)$ that counts the number of times the $n$-gram $e$ occurs in summary $S \subseteq V$, the ROUGE-$n$ score function is defined as $\mathrm{ROUGE}_n(S) := \frac{\sum_{k=1}^{K} \sum_{e \in R_k} \min\{C_e(S),\, C_e(R_k)\}}{\sum_{k=1}^{K} \sum_{e \in R_k} C_e(R_k)}$.

6 Experiments. We applied our method to compressive summarization tasks with the three objective functions above: the coverage function, the coverage function with rewards, and ROUGE-1. To benchmark our method, we also applied the ILP-based method to the same tasks. The two methods were compared in terms of achieved approximation ratios, ROUGE-1 scores, and running times.

Table 1: Approximation ratios, ROUGE-1 scores, and running times for our method (Greedy) and the ILP-based method (ILP); average values over the 50 topics. Summaries obtained with the ILP-based method under the ROUGE-1 objective function are oracle summaries.

| Objective function | Method | Approximation ratio | ROUGE-1 | Time (ms) |
| --- | --- | --- | --- | --- |
| Coverage | Greedy | 0.964 | 0.347 | 1.34 |
| Coverage | ILP | 1.00 | 0.346 | 231 |
| Coverage with rewards | Greedy | 0.967 | 0.334 | 1.44 |
| Coverage with rewards | ILP | 1.00 | 0.332 | 552 |
| ROUGE-1 | Greedy | 0.985 | 0.468 | 0.759 |
| ROUGE-1 | ILP (oracle) | 1.00 | 0.494 | 92.1 |
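The coverage objective plugs directly into the greedy sketch above. A small illustrative implementation follows; the variable names (`chunk_words`, `word_weight`) are our own, not the paper's.

```python
def make_coverage_fn(chunk_words, word_weight):
    """Build COV(S): the total weight of distinct words covered by S.

    chunk_words: dict mapping chunk id -> set of word ids it contains.
    word_weight: dict mapping word id -> nonnegative weight w_j.
    """
    def cov(chunks):
        covered = set()
        for v in chunks:
            covered |= chunk_words[v]
        return sum(word_weight[j] for j in covered)
    return cov

# Monotone: adding a chunk can only add covered words. Submodular: a chunk's
# marginal gain shrinks as more of its words are already covered elsewhere.
```

Composed with the path-to-vertex mapping $V_X$, such a function plays the role of $f(X) := g(V_X)$ in the greedy sketch above.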
6.1 Settings. In the following experiments, we regard $V$ as the set of all chunks in the document data. For each chunk $v \in V$, we let $\ell_v$ be the number of words contained in $v$, and we set the length limit $L$ to 100. For the coverage function and the one with rewards, the weight values $w_j$ ($j \in [M]$) were estimated by logistic regression (Yih et al., 2007) trained on the DUC-2003 dataset. For the coverage function with rewards, we set the parameter $\lambda$ to 0.9. The experiments were conducted on the DUC-2004 dataset for multiple-document summarization evaluation, a commonly used benchmark; it consists of 50 topics, each with 10 newspaper articles. The dependency trees for this dataset were obtained as follows: we first applied the Stanford parser (de Marneffe et al., 2006) to all sentences in the dataset to obtain dependency relations between words; we then applied Filippova's rules (Filippova and Strube, 2008; Filippova and Altun, 2013) to the obtained relations to construct trees that represent dependency relations between chunks. To obtain summaries with high readability, we treated a set of chunks connected by certain relations (e.g., subject-object) as a single chunk. Our algorithm was implemented in C++ and compiled with GCC version 4.8.5. The ILP-based method solved ILPs with CPLEX version 12.5.1.0, a widely used commercial ILP solver. The details of the ILP formulations for the three objective functions are presented in the Appendix. All experiments were conducted on a Linux machine (CPU: Intel Xeon E5-2620 v4 2.10GHz, 32GB RAM).

6.2 Results. Table 1 summarizes the achieved approximation ratios, ROUGE-1 scores, and running times. The ILP-based method is always optimal in terms of objective values (i.e., 100% approximation is attained), while our method achieved more than 95% approximation. We observed that the maximum number $\kappa$ of leaves in a sentence tree was about 22 on average, which corresponds to a 2.2%-approximation guarantee for our algorithm. Our method therefore empirically performs far better than its theoretical guarantee; this is often the case with the greedy algorithm for submodular maximization, in particular when the problems have complex constraints. The ROUGE-1 scores of our method are comparable to those of the ILP-based method. With the coverage function and the one with rewards, our method even attained slightly higher ROUGE-1 scores than the ILP-based method;⁴ note that this is possible because objective values and ROUGE-1 scores are not perfectly correlated. The results on approximation ratios and ROUGE-1 scores imply that our method compares favorably with the ILP-based method in terms of empirical performance. With regard to running times, our method substantially outperformed the ILP-based method: it was about 170, 380, and 120 times faster than the ILP-based one for the coverage function, the one with rewards, and the ROUGE-1 objective, respectively. Table 2 shows examples of the summaries obtained by our method and the ILP-based method; both methods used the coverage function with rewards as the objective function.
[Footnote 4: Similar results were observed in (Takamura and Okumura, 2009a).]

Table 2: Example summaries produced by the two methods with the coverage function with rewards.

Greedy: Yeltsin suffered from disease and had a heart attack followed by multiple bypass surgery in the months. Russian President Boris Yeltsin cut short a trip to Central Asia on Monday due to a respiratory infection that revived questions about his health and ability to lead Russia through a sustained economic crisis. Doctors insisted that Yeltsin fly home ahead of schedule. The prime minister reiterated Wednesday that Yeltsin has plans to resign early elections. Russia's Constitutional Court opened hearings Thursday on whether Boris Yeltsin can seek a term. Sources in Primakov's office said the cancellation was due to concerns.

ILP: Russian President Boris Yeltsin cut short a trip to a respiratory infection that revived questions about his health and ability to lead Russia through a economic crisis. Yeltsin was spending outside Moscow his spokesman Dmitry Yakushkin told reporters. Doctors insisted Monday that Yeltsin fly home from Central Asia ahead of schedule because he was suffering. Yeltsin falls ill speculation arises. The prime minister reiterated Wednesday that Yeltsin has plans to resign early elections. Russia's Constitutional Court opened hearings Thursday on whether Boris Yeltsin can seek a term. Sources in Primakov's office said the cancellation was due to concerns.

We see that",
"both methods successfully created informative summaries that preserve original dependency relations.",
"The readability of obtained summaries is unfortunately not high enough.",
"Note that not only our method but also most compressive summarization methods suffer this problem; in fact, there is lit-tle difference between the two summaries obtained with our method and the optimal ILP-based method with regard to readability.",
"To conclude, the empirical performance of our method matches that of the ILP-based method, while running about 100 to 400 times faster.",
"We proposed a fast greedy method for compressive summarization.",
"Our method works with any monotone submodular objective function; examples of such functions include the coverage function, ROUGE , and many others.",
"The 12 (1 e 1 / ) -approximation guarantee of our method was proved, which generalizes the 12 (1 e 1 ) approximation for submodular maximization with a knapsack constraint.",
"Experiments showed that our greedy method empirically achieves more than 95%-approximation and that it runs about 100 to 400 times faster than the ILP-based method implemented with CPLEX .",
"With the coverage function and its variant, our method attained as high ROUGE 1 scores as the ILP-based method.",
"As mentioned above, current compressive summarization systems often fail to achieve high readability, and one possible approach to this problem is to develop better objective functions.",
"Since our method is applicable to various monotone submodular objective functions and can find almost optimal solutions efficiently, our method would be helpful in testing the performance of newly proposed objective functions.",
"Thus we believe that our method is useful for advancing the study into compressive summarization.",
"Interestingly, STKP can be seen as a variant of DR-submodular maximization (Soma and Yoshida, 2017), which is a submodular maximization problem defined over integer lattice.",
"The constraint that appears in DR-submodular maximization is somewhat easier to deal with than that of our problem; exploiting this, Soma and Yoshida (2017) developed a polynomial-time algorithm that achieves roughly 12 -approximation.",
"The techniques studied in this field may be useful to develop better algorithms for STKP, which we leave for future work."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective"
] |
[
"In online domain-specific customer service applications, many companies struggle to deploy advanced NLP models successfully, due to the limited availability of and noise in their datasets.",
"While prior research demonstrated the potential of migrating large open-domain pretrained models for domain-specific tasks, the appropriate (pre)training strategies have not yet been rigorously evaluated in such social media customer service settings, especially under multilingual conditions.",
"We address this gap by",
"(i) collecting a multilingual social media corpus containing customer service conversations (865k tweets),",
"(ii) comparing various pipelines of pretraining and finetuning approaches,",
"(iii) applying them on 5 different end tasks.",
"We show that pretraining a generic multilingual transformer model on our in-domain dataset, before finetuning on specific end tasks, consistently boosts performance, especially in non-English settings.",
"1 1 Introduction Online platforms and social media are increasingly important as communication channels in various companies' customer relationship management (CRM).",
"To ensure effective, qualitative and timely customer service, Natural Language Processing (NLP) can assist by providing insights to optimize customer interactions, but also in real-time tasks:",
"(i) detect emotions (Gupta et al., 2010),",
"(ii) categorize or prioritize customer tickets (Molino et al., 2018),",
"(iii) aid in virtual assistants through natural language understanding and/or generation (Cui et al., 2017), etc.",
"Despite this NLP progress for CRM, often small and medium-sized companies (SMEs) struggle with applying such recent technology due to the limited size, noise and imbalance in their datasets.",
"General solutions to such challenges are transfer 1 Dataset and code available at https://github.com/ hadifar/customerservicetasks learning strategies (Ruder, 2019): feature extraction uses frozen model parameters after pretraining on an external (larger) training corpus, while finetuning continues training on the smaller in-domain corpus.",
"In the large body of work adopting such strategies (e.g., Pan and Yang 2009), little effort has been put into addressing specific CRM use cases that need to rely on social media data that is noisy, possibly multilingual, and domain-specific for a given company.",
"In this paper, we analyze the possibilities and limitations of transfer learning for a number of CRM tasks, following up on the findings of Gu-rurangan et al. (2020) who demonstrate gains from progressive finetuning on in-domain and task-specific monolingual data.",
"Specifically, our contributions are that we (1) collect a multilingual corpus of 275k Twitter conversations, comprising 865k tweets between customers and companies in 4 languages (EN, FR, DE, NL), covering distinct sectors (telecom, public transport, airline) (Section 4.1); (2) rigorously compare combinations of pretraining and finetuning strategies (Section",
"3) on 5 different CRM tasks (Section 4.2), including prediction of complaints, churn, subjectivity, relevance, and polarity; and (3) provide empirical results (Section 5).",
"We find that additional pretraining on a moderately sized in-domain corpus, before task-specific finetuning, contributes to overcoming the lack of a large multilingual domain-specific language model.",
"Its effect is much stronger than consecutive finetuning on smaller datasets for related end tasks.",
"Furthermore, our experimental results show that when pretrained models are used in feature extraction mode, they struggle to beat well-tuned classical baselines.",
"A wide range of NLP research has been devoted to customer services.",
"Hui and Jha (2000) employed data mining techniques to extract features from a customer service database for decision support and machine fault diagnosis.",
"Gupta (2011) extracted a set of sentiment and syntactic features from tweets for customer problem identification tasks.",
"Molino et al. (2018) introduced the Customer Obsession Ticket Assistant for ticket resolution, using feature engineering techniques and encoder-decoder models.",
"Highly popular pretrained language models, such as BERT (Devlin et al., 2019), have also been explored for different customer service tasks: Hardalov et al. (2019) considered re-ranking candidate answers in chatbots, while Deng et al. (2020) proposed BERT-based topic prediction for incoming customer requests.",
"Although the performance gains obtained by pretraining language models are well-established, they need further exploration in terms of multilingual-ity.",
"Some studies (Pires et al., 2019; Karthikeyan et al., 2019; Wu et al., 2019) have investigated the transferability of multilingual models on different tasks, but they do not consider the effect of progressive pretraining on a smaller and less diverse multilingual corpus, as we will do.",
"We selected some of the most popular publicly available pretrained language models to explore transfer learning properties for CRM classification tasks: RoBERTa (Liu et al., 2019), XLM (Conneau et al., 2020), and BERTweet (Nguyen et al., 2020).",
"These models are pretrained on the English Wikipedia and BookCorpus (Zhu et al., 2015), CommonCrawl in 100 languages, and 850M English tweets, respectively.",
"The XLM and BERTweet pretraining procedure is based on RoBERTa, which itself is a transformer-based Masked Language Model (MLM; Devlin et al., 2019).",
"All of these models require a different classifier head' for each target task to estimate the probability of a class label.",
"We adopt a straightforward approach to transfer learned representations: we continue pretraining the considered transformer models on a 4-lingual corpus of customer service Twitter conversations (see Section 4.1), i.e., the overall domain of all considered sub-tasks.",
"After that, we apply additional adaptation for cross-lingual transfer (Section 5.1), as well as cross-task transfer (Section 5.2).",
"The following notations are used throughout the rest of this paper to describe pretraining stages: further p retraining the original MLM on our 4-lingual tweet conversation corpus.",
"fi netuning the pretrained model extended with the MLP classifier on the target task freezing the pretrained model (i.e., feature extraction mode), only training the top classifier on the target task.",
"We thus indicate several multistage procedures: e.g., XLM indicates that the XLM model is further pretrained on the in-domain tweet corpus, followed by finetuning on the end task.",
"We focus our experiments on text classification problems that are commonly dealt with by customer service teams.",
"First, we describe our Twitter conversation corpus used for in-domain finetuning (Section 4.1), then we introduce the target tasks and corresponding datasets (Section 4.2).",
"For most target tasks, we hold out 10% of the data for testing, while the remaining part is used for training.",
"We then utilize 10-fold cross-validation on the training data to select optimal hyper-parameters for each end task.",
"When the dataset comes with a predefined train-test split, we keep that.",
"For the pretrained transformer models (RoBERTa, XLM, BERTweet), we use the publicly available base' versions.",
"Our corpus for in-domain pretraining was crawled using Twitter's API.",
"2 The collected dataset is small compared to the original language models' data, but still larger than most corpora which SMEs have at their disposal.",
"As such, it represents an easily collectable customer service dataset that SMEs can leverage to boost models on their own data.",
"The tweets were gathered between May and October 2020.",
"We started by gathering a list of 104 companies, all active on Twitter, in the sectors of telecommunication, public transportation, and airlines.",
"We aimed for four different languages (English, French, Dutch, German).",
"We preprocessed the data by removing conversations not covering at least one client/company interaction, or containing undefined languages.",
"We further converted usernames and links into the special tokens @USER and @HTTP URL, respectively, 2 https://developer.twitter.com/en/docs Language | convs | | tweets | English 135 .",
"and converted emojis into corresponding strings.",
"3 The resulting corpus contains 865k tweets over 275k conversations in the four target languages (see Table 1).",
"Even though our corpus contains data from different sectors, we noticed that the dialogue flow, customer intents, and structure of conversations are fairly comparable across the target sectors (cf. Fig. 1).",
"Examples of often recurring types of tweets are expressions of gratitude towards customers, requests for information, or typical ways to reply to complaints.",
"Hence, we expect this corpus to be useful not only for companies that fall under one of the included sectors, but also for other companies that provide customer services over tweets.",
"Complaint Prediction Timely complaint detection is of utmost importance to organizations, as it can improve their relationship with customers and prevent customer churns.",
"Preotiuc-Pietro et al. 3 We used https://github.com/carpedm20/ emoji to convert emojis (2019) and Greenleaf et al. (2015) proposed two datasets for identifying complaints on social media which contain 3,499 and 5,143 instances, respectively.",
"The former ( Complaint-2 ) covers two types of companies (airline companies and telecommuni-cation), while the latter ( Complaint-9 ) consists of data from nine domains such as food, car, software, etc.",
"Both datasets are in English.",
"To experiment with cross-lingual tuning for complaint prediction, we use the French complaint dataset for railway companies from ( Complaint-R ; Ruytenbeek et al. 2020).",
"Since all their 201 conversations are labeled as complaints, for training, we complemented them with negative sampling from French railway conversations in our own Twitter corpus.",
"For testing, we annotated 200 held-out conversations.",
"Churn prediction Customer churn implies that a customer stops using a company's service, negatively impacting its growth.",
"Churn prediction is cast as a binary classification task (churn or non-churn) on any input text.",
"We utilize the data provided by Amiri and Daume III (2015) with tweets from three telecommunication brands, resulting in a corpus of 4,339 labelled English tweets.",
"Subjectivity Prediction Detecting subjectivity in conversations is a key task for companies to efficiently address negative customer feelings or reward loyal customers.",
"It may also serve as a filtering task for more fine-grained tasks such as emotion identification.",
"We annotated 8,174 Dutch conversations from our Twitter corpus (Section 4.1).",
"A dialogue is judged subjective if at least one of the customer turns contains emotions (explicit or implicit), and otherwise objective.",
"Relevance Prediction The goal of this task is to determine whether an incoming text is relevant for further processing or not.",
"We use data from GermEval 2017 (Task A) which contains over 28k short length messages from various social media and web sources on the German public train operator Deutsche Bahn (Wojatzki et al., 2017).",
"For this dataset, the evaluation is measured on two evaluation sets: one collected from the same time period as the training and development set (viz. syn-chronic), and another one containing data from a later time period (viz. diachronic).",
"Polarity Prediction For this task, a system has to classify the sentiment that resides in a given text fragment according to polarity (positive, negative, or neutral).",
"Polarity prediction has often been applied on reviews, by predicting the attitude or senti-Complaint-2 Complaint-R Churn Subjectivity Relevance Polarity (English) (French) (English) (Dutch) (German) (German) Model ACC F1 ACC F1 ACC F1 ACC F1 F1 syn.",
"We use the GermEval 2017 (Task B) dataset (Wojatzki et al., 2017) (cf. supra) to analyze the polarity of the Deutsche Bahn customers' feedback.",
"We also use the polarity dataset from Sanders (2011) ( Sanders ).",
"We now present our findings for two finetuning scenarios: transfer across languages and across tasks.",
"Section 5.1 investigates the effect of unsupervised multilingual pretraining.",
"Section 5.2 then explores how to further improve by finetuning the pretrained language models on similar tasks.",
"We compare the pretrained transformer experiments with the following baselines: majority-class (to get an idea of class imbalance), logistic regression (LR) and support vector machine (SVM) with tf-idf features.",
"For the three transformer models, we compare the feature extraction setting ( ) with finetuning ( ) on the target task.",
"On the multilingual XLM, we measure the impact of first pretraining ( ) on our multilingual tweet corpus, after which both transfer settings are again tested on the target tasks.",
"Table 2 reports the results (in terms of accuracy and F1 scores), including scores from literature when available (Reference').",
"It should be noted that the reference scores are not state-of-the-art, but they are the scores communicated in the original dataset papers.",
"Only for the English tasks (Complaint-2 and Churn), results for BERTweet and RoBERTa are reported.",
"The monolingual tweet-based model BERTweet outperforms all other models when finetuned on these tasks.",
"Although a large domain-specific mono-lingual language model seems a fine choice, it may not be available for other languages.",
"We therefore investigate the impact of a multilingual generic model (XLM was not specifically pretrained on tweets), and the impact of additional finetuning on our dedicated twitter corpus.",
"In general, transformer models finetuned on the end task strongly outperform frozen ones.",
"For the non-English tasks, the model XLM with the frozen XLM encoder shows weak performance, in some cases below the baselines.",
"The model XLM finetuned on the end task performs better.",
"For the non-English tasks, the XLM model pretrained on our Twitter corpus and finetuned on the tasks (XLM ) in all cases outperforms the finetuned XLM by a few percentage points and the baselines by an even larger margin.",
"The performance differences between XLM and XLM clearly underscore the importance of in-domain multilingual pretraining.",
"Furthermore, the results of XLM for the English tasks suggest that additional pretraining on a moderately small, in-domain dataset can make the performance of the multilingual XLM model comparable to the monolingual RoBERTa.",
"Another promising observation is that the hyper-tuned classical baselines, such as SVM, are strong competitors compared to frozen language models, especially on tasks that are highly sensitive to domain-specific features.",
"For instance, for churn prediction, keywords such as switch to', quit' and change provider' can easily be triggered by the SVM, while frozen pretrained models have not learned to identify these features.",
"This finding might be helpful to achieve better insight into the operational aspects of frozen neural models compared to simple classical approaches.",
"As a side result (not explicitly included in this work) we found that the multistage pretraining (XLM ) leads to better performance when incorporating multiple languages compared to a single language.",
"The performance drops especially when training data from a single language (e.g., Dutch) is fed into the model, which is then evaluated on other languages (e.g., English).",
"We now investigate to what extent representations tuned on a related task can help for a given target task.",
"In particular, Complaint-9 is the end task, and we compare the effect of finetuning on the end task only, vis-`a-vis first finetuning on a related task and then on the end task.",
"For the related task, we experiment with Complaint-2 and Sanders, as shown in Table 3.",
"We observe that there seems to be no clear merit in the additional finetuning step on a small related end task.",
"Pretraining on our larger Twitter corpus, however, still increases effectiveness.",
"We investigated multilingual and across-task transfer learning for customer support tasks, based on",
"transformer-based language models.",
"We confirmed prior insights that finetuning the models on low-resource end tasks is important.",
"Additional pretraining on a moderately sized in-domain corpus, however, provides a complementary increase in effectiveness, especially in the non-English setting and starting from a generic multilingual language model.",
"We provide a newly collected multilingual in-domain corpus for customer service tasks and derive the aforementioned findings from experiments using it on five different tasks.",
"This research received funding from the Flemish Government under the Research Program Artificial Intelligence 174B09119.",
"We would also like to thank the anonymous reviewers for their valuable and constructive feedback."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"other",
"other"
] |
[
"Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has facilitated or enforced learning of monotonic attention behavior via specialized attention functions or pretraining.",
"In this work, we introduce a monotonicity loss function that is compatible with standard attention mechanisms and test it on several sequence-to-sequence tasks: grapheme-to-phoneme conversion, morphological inflection, transliteration, and dialect normalization.",
"Experiments show that we can achieve largely monotonic behavior.",
"Performance is mixed, with larger gains on top of RNN baselines.",
"General monotonicity does not benefit transformer multihead attention, however, we see isolated improvements when only a subset of heads is biased towards monotonic behavior.",
"Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has focused on learning monotonic attention behavior either through specialized attention functions (Aharoni and Goldberg, 2017; Raffel et al., 2017; Wu and Cotterell, 2019) or pretraining (Aji et al., 2020).",
"However, it is non-trivial to port specialized attention functions to different models, and recently, Yolchuyeva et al. (2019); Wu et al. (2021) found that a transformer model (Vaswani et al., 2017) outperforms previous work on monotone tasks such as grapheme-to-phoneme conversion, despite having no mechanism that biases the model towards monotonicity.",
"In the transformer, it is less straightforward to what extent individual encoder states, especially in deeper layers, still represent distinct source inputs after passing through several self-attention layers.",
"Consequently, it is unclear whether enforcing monotonicity in the transformer is as beneficial as for recurrent neural networks (RNNs).",
"In this paper, we investigate the following research questions:",
"1. How can we incorporate a monotonicity bias into attentional sequence-to-sequence models such as the transformer?",
"2. To what extent does a transformer model benefit from such a bias?",
"Specifically, we want to incorporate a monotonicity bias in a way that is agnostic of the task and model architecture, allowing for its application to different sequence-to-sequence models and tasks.",
"To this end, we introduce a loss function that measures and rewards monotonic behavior of the attention mechanism.",
"1 We perform experiments and analysis on a variety of sequence-to-sequence tasks where we expect the alignment between source and target to be highly monotonic, such as grapheme-to-phoneme conversion, transliteration, morphological inflection, and dialect normalization and compare our results to previous work that successfully applied hard monotonic attention to recurrent sequence-to-sequence models for these tasks (Wu et al., 2018a; Wu and Cotterell, 2019).",
"Our results show that a monotonicity bias learned through a loss function is capable of making the soft attention between source and target highly monotonic both in RNNs and the transformer.",
"We find that this leads to a similar improvement to previous works on hard monotonic attention for RNNs, whereas for transformer models, the results are mixed: Biasing all attention heads towards monotonicity may limit the representation power of multihead attention in a way 1 Code and scripts available at: https://github.",
"that is harmful even for monotonic sequence-to-sequence tasks.",
"However, for some tasks, we see small improvements when limiting monotonicity to only a subset of heads.",
"Attention models (Bahdanau et al., 2015; Luong et al., 2015; Vaswani et al., 2017) are a very powerful and flexible mechanism to learn the relationship between source and target sequences, but the flexibility might come at the cost of making the relationship harder to learn.",
"Previous work has shown that their performance can be improved by introducing inductive biases.",
"Cohn et al. (2016) introduce various structural alignment biases into a neural machine translation model, including a positional bias.",
"While this bias is motivated by the fact that a given token in the source often aligns with a target token at a similar relative position, it does not explicitly encourage monotonicity.",
"In contrast, Raffel et al. (2017) propose to modify the attention mechanism to learn hard monotonic alignments instead of computing soft attention over the whole source sequence.",
"Several extensions have been proposed: having a pointer monotonically move over the source sequence and computing soft attention on a local window (Chiu and Raffel, 2018) or from the beginning of the sequence up to the pointer (Arivazhagan et al., 2019).",
"For tasks like simultaneous translation and automatic speech recognition, the main benefit from hard monotonic attention is that decoding becomes faster and can be done in an online setting.",
"However, many sequence-to-sequence tasks behave roughly monotonic and biasing the attention towards monotonicity can improve performance; especially in low-resource settings.",
"Aharoni and Goldberg (2017) show that hard monotonic attention works well for morphological inflection if it mimics an external alignment.",
"Wu et al. (2018b) propose a probabilistic latent-variable model for hard but non-monotonic attention which Wu and Cotterell (2019) later extend to exact hard monotonic attention.",
"In contrast to Aharoni and Goldberg (2017), the alignment is learned jointly with the model.",
"Their approach outperforms several other models on grapheme-to-phoneme conversion, transliteration, and morphological inflection.",
"Monotonic attention has also improved tasks such as summarization (Chung et al., 2020) and morphological analysis (Hwang and Lee, 2020).",
"Recently, the transformer architecture (Vaswani et al., 2017) has outperformed RNNs in low-resource settings for character-level transduction tasks (Yolchuyeva et al., 2019; Wu et al., 2021) and neural machine translation (Araabi and Monz, 2020).",
"While there has been some work on extending the methods of Raffel et al. (2017); Chiu and Raffel (2018); Arivazhagan et al. (2019) to multihead attention (Ma et al., 2020; Liu et al., 2020), we are not aware of any work that studied monotonicity in transformers for monotonic tasks, such as grapheme-to-phoneme conversion, transliteration, or morphological inflection.",
"To this end, we propose a model-agnostic monotonicity loss that can seamlessly be integrated into RNNs as well as the transformer.",
"Our monotonicity loss captures how monotone the soft attention behaves during training, while two hyperparameters allow us to control how much monotonicity is enforced.",
"By encouraging monotonicity through a loss instead of a modification of the attention mechanism, our implementation still brings all the benefits of soft attention to tasks where fast, online inference is not paramount and allows us to explore various trade-offs between unconstrained and fully monotonic attention.",
"We now introduce our monotonicity loss function.",
"The loss function is differentiable and compatible with standard soft attention mechanisms and is thus easy to integrate into popular encoder-decoder architectures such as the transformer.",
"On a high level, we compare the attention distribution between decoder time steps in a pairwise fashion and measure whether the mean attended position increases for each pair.",
"Let us denote the input sequence as X = ( x 1 , ..., x | X | ) , and the output sequence as Y = ( y 1 , ..., y | Y | ) .",
"The interface between the encoder and decoder is one or several attention mechanisms.",
"In its general form, the attention mechanism computes some energy e ij between a decoder state at time step i and an encoder state j .",
"While this energy function varies, with popular choices being a feedforward network (Bahdanau et al., 2015) or (scaled) dot-product (Luong et al., 2015; Vaswani et al., 2017), they are typically normalized to a vector of attention weights using the softmax t h o r o u g h EOS TH ER OW L t h o r o u g h EOS t h o r o u g h EOS t h o r o u g h EOS margin: loss: 0 0.5 1 0.364 0.530 0.697 margin: loss: 0.000 0.167 0.333 margin: loss: 0.000 0.000 0.152 margin: loss: 0.000 0.000 0.000 0 0.5 1 0 0.5 1 0 0.5 1 l y IY EOS l y l y l y TH ER OW L IY EOS TH ER OW L IY EOS TH ER OW L IY EOS Target Output S o u rce I npu t Target Output Target Output Target Output Figure 1: Average attention positions between target output characters and source input characters and the corresponding monotonicity loss for different attention distributions, and with different margins .",
"These attention weights are then applied to obtain a weighted average c i of a vector of value states V : c i = | x | (cid:88) j =1 ij v j (2) For our monotonicity loss, we also compute the mean attended position a i : a i = | x | (cid:88) j =1 ij j (3) a",
"We can then define the monotonicity loss in pairwise fashion, comparing the mean attended position at time steps i and i + 1 :",
"is a hyperparameter that controls how deviations from the main diagonal are penalized.",
"Let us first consider the case with = 0 : if a i +1 a i for all positions i , i.e. if the mean attended position is weakly increasing 2 , then the loss is 0.",
"Any decrease 2 We can swap a i and a i +1 in equation 4 to bias the model towards monotonically decreasing attention.",
"in the mean attended position will incur a cost that is proportional to the amount of decrease, relative to the source sequence length; 3 this allows differentiation of the loss, and will also serve as a measure of the degree of monotonicity in the analysis.",
"We might want to bias the model towards strictly monotonic behavior, penalizing it if a remains unchanged over several time steps.",
"We can achieve this by incurring a loss if a does not increase by some margin, controlled by .",
"At the most extreme, with = 1 , the loss is minimized if the mean attended position follows the main diagonal of the alignment matrix, increasing by | X | | Y | at each time step.",
"Figure 1 shows how the margin can influ-ence the monotonicity loss with some examples.",
"In equation 4, costs are later summed over the target sequence.",
"In practice, we normalize the cost by the number of tokens in a batch for training stability, as is typically done for the cross-entropy loss.",
"If a model has multiple attention mechanisms, e.g. attention in multiple layers, or multihead attention, we separately compute the loss for each attention mechanism, then average the losses.",
"We can also just apply the loss to a subset of attention mechanisms, allowing different attention heads to learn specialized behavior (Voita et al., 2019).",
"3 Making the cost relative to the source sequence length ensures that the worst-case cost per timestep is independent of source sequence length.",
"We implement the loss function in sockeye (Hieber et al., 2018), and experiment with RNN and transformer models.",
"We list the specific baseline settings for each task in Appendix A.2.",
"The monotonic loss function is controlled by a hyperparameter for the margin ( ), and an additional scaling factor for the loss itself ( ).",
"Preliminary experiments have shown that the monotonicity loss has an undesirable interaction with attention dropout, which is commonly used in transformer models.",
"Randomly dropping attention connections during training makes it harder to reliably avoid a decrease in the mean attended position, favoring a degenerate local optimum where attention resides constantly on the first (or last) encoder state.",
"To avoid this problem, we use DropHead (Zhou et al., 2020) instead, which has a similar regularizing effect as attention dropout, but does not interact with the monotonicity loss.",
"In addition to the standard evaluation metrics used in each task, we provide the monotonicity loss on the test set and the percentage of target tokens for which the average source attention position has increased (by some margin).",
"We perform experiments on three word-level and one sentence-level sequence-to-sequence tasks: Grapheme-to-Phoneme Conversion For grapheme-to-phoneme conversion, we use NETtalk (Sejnowski and Rosenberg, 1987) 4 and CMUdict, 5 two datasets for English, with the same data split as Wu and Cotterell (2019).",
"For experiments with RNN models, we follow the settings in Wu et al. (2018b) (large configuration).",
"6 For experiments with transformer models, we follow the settings suggested in Wu et al. (2021), however, we use dropout rates of 0.3 (NETtalk) and 0.2 (CMUdict) instead of 0.1 and 0.3.",
"Furthermore, we use a smaller feed-forward dimension for the NETtalk models (512 instead of 1024), since this a relatively small dataset ( 14k samples).",
"For both RNN and transformer models, we use early stopping with phoneme error rate, as opposed to a minimum learning rate value as in Wu et al. 4 https://archive.ics.uci.edu/ml/ datasets/Connectionist+Bench+(Nettalk+ Corpus) 5 https://github.com/cmusphinx/cmudict 6 Even though we follow the settings in Wu et al. (2018b), our RNN models are smaller than theirs (4.5M vs. 8.6M parameters).",
"(2018b) and Wu et al. (2021).",
"We evaluate our models with word error rate (WER) and phoneme error rate (PER).",
"For morphological inflection, we use the CoNLL-SIGMORPHON 2017 shared task dataset.",
"7 We choose all 51 languages from the high-resource setting where the training data for each language consists of 10,000 morphological tags + lemma and inflected form pairs (except for Bengali and Haida which have 4,243 and 6,840 pairs respectively) and from the medium-resource setting with 1,000 training examples per language.",
"Our baselines performed very poorly on the low-resource setting with only 100 training examples and we decided to focus on the other two tasks instead.",
"We preprocess the data to insert a separator token between the morphological tags and the input lemma.",
"The monotonicity loss is then only computed on the positions to the right of the separator token's position.",
"We follow Wu et al. (2021) and use special positional encodings for the morphological tags in the transformer.",
"Unlike their approach, where the position for all tags was set to 0, we set the position of the separator token to 0 and sequentially decrease the positions of the morphological tags to the left (Figure 2).",
"This serves to stabilize the positional encodings of the lemma tokens, while still accounting for the fixed order of morphological tags in the dataset.",
"In preliminary experiments, we observed an improvement of 0.63% in accuracy over vanilla positional encodings.",
"We train models on character-level for morphological inflection following the previously recommended settings for RNNs in Wu et al. (2018b) 7 https://github.com/sigmorphon/ conll2017 and for transformers in Wu et al. (2021) (except for reducing the feed-forward dimension to 512 instead of 1024).",
"For the high resource datasets, we use a batch size of 400, for the medium resource datasets 200.",
"Early stopping is done in the same way as for grapheme-to-phoneme conversion.",
"We use the official evaluation script to compute word-level accuracy (ACC) and character-level edit distance (LEV).",
"For transliteration, we experiment on the NEWS2015 shared task data (Zhang et al., 2015) and use the same subset of 11 script pairs that Wu and Cotterell (2019) used in their experiments: AR-EN, EN-BA, EN-HI, EN-JA, EN-KA, EN-KO, EN-PE, EN-TA, EN-TH, JN-JK, and TH-EN.",
"Total training dataset sizes range from 6,761 source names for EN-KO up to 27,789 source names for EN-TH.",
"For certain script pairs, multiple transliterations per source name are acceptable.",
"We add all possible pairs to our training data, which only has a large effect on EN-AR, where there are on average 10 acceptable transliterations per source name.",
"Since the references of the official shared task test sets were not released, we follow Wu and Cotterell (2019) and use the development set as our test set.",
"We randomly sample 1,000 names from the training sets as our development sets for script pairs with more than 20,000 training examples and 100 for script pairs with fewer training examples.",
"Again, we follow Wu et al. (2018b) for hyperparameters in RNNs and Wu et al. (2021) in transformers (smaller feed-forward dimensions of 512).",
"We early stop training as for grapheme-to-phoneme conversion.",
"We evaluate our models following Zhang et al. (2015) and compute word-level accuracy (ACC) and character-level mean F-score (MFS).",
"The formula for MFS is in Appendix A.1.",
"For this work, we consider dialect normalization as a machine translation task from dialect to standard.",
"We work with the dataset described in Aepli and Clematide (2018), which consists of 26,015 crowd-sourced German translations of 6,197 original Swiss German sentences.",
"We use three documents (10%) as test sets and randomly split the rest in development and training set (10% and 80% re-spectively).",
"The alignment between Swiss German and the German translations is highly monotonic, but there are occasional word order differences, as es isch aber als Kompliment gmeint gsi es war aber als Kompliment gemeint it was however as compliment meant Figure 3: Swiss-German to German dialect normalization example with verb reordering.",
"The models are trained on subwords obtained via BPE (Sennrich et al., 2016), created with subword-nmt computing 2000 merges.",
"We treat this as a low-resource machine translation task, and thus follow hyperparameters by Sennrich and Zhang (2019) for the RNN models, while the transformer models are trained according to Araabi and Monz (2020).",
"We evaluate our models with BLEU (Papineni et al., 2002).",
"8 4.2 Results In addition to task-specific evaluation metrics, we use the loss function to score the monotonicity of the attention on the test set for all models (reported as LMONO ).",
"Furthermore, we report the percentage of decoding states for which the average source attention position a increases by at least | X | | Y | as % mono .",
"In other words, this is the percentage of states for which the pairwise loss is 0.",
"We test different settings on the grapheme-to-phoneme task, see Table 1 for results with RNNs (top) and transformers (bottom).",
"We find that models trained with the additional loss have more monotonic attention than the baselines (see % mono and LMONO ).",
"We observe large differences both in terms of WER and PER across multiple runs for the baseline, especially for the small data set.",
"9 We therefore report the average result of three runs with standard deviations for each model.",
"Attention in the RNN baselines is already quite monotonic, but we observe small improvements with = 0.5.",
"For transformer models, on the other hand, > 0 seems to harm the performance, therefore we only report results with = 0 .",
"In general, multihead attention in the transformer does not seem to benefit much from enforced monotonicity.",
"For morphological inflection, we show the average results over all 51 languages in Table",
"2. Our RNN baseline is slightly better than previous work, whereas our transformer baseline performs slightly worse.",
"We notice that the transformer models trained with = 0 on the morphological inflection tasks result in the model always attending to the same source position at every decoding state.",
"We therefore set to 0.1 for transformer models trained on this task.",
"For the remaining tasks, we report results with set to 0 and always set to 0.1 so as not to overfit hyperparameters on each task.",
"The baseline monotonicity loss for this task is higher than for grapheme-to-phoneme conversion but training with the monotonicity loss can drastically increase the monotonicity of the attention mechanisms.",
"This can be seen both in the lower monotonicity score and the higher percentage of decoding states where the average source attention position increases from the previous state.",
"In terms of performance, we do not see an improvement over the baselines.",
"Our results for transliteration are shown in Table 3 (average over all 11 datasets).",
"Again, we can see that the monotonicity loss effectively biases the attention towards a more monotonic behavior, decreasing the monotonicity score and increasing the percentage of decoding states where the average source attention position increases.",
"In terms of performance, there is a small gain for RNNs both in word-level accuracy and character-level mean F-score.",
"Training with the monotonicity loss does not improve the performance of the transformer compared to the baseline.",
"Since dialect normalization is our only sentence-level sequence-to-sequence task, it is interesting to see how the monotonicity loss works on longer sequences where more reordering is possible compared to the previous tasks.",
"The less monotonic nature of this task is reflected in the fact that neither of our models trained towards monotonicity outperforms the non-monotonic baselines, see Table",
"3. Dialect normalization is also the only task where the transformer does not outperform the RNN models.",
"Overall, our results show that the proposed monotonicity loss succeeds in making attention more monotonic, but effects on quality are more positive for RNNs than for transformers.",
"We now analyze the proposed loss function in more detail.",
"First, we plot the monotonicity score during training and compare how fast it decreases over time.",
"We find that the monotonicity score decreases very fast for the models trained with our loss function and then stays rather constant.",
"The baseline models show various behaviors: for some datasets and models, the score decreases over training time suggesting that the model does learn to attend more monotonically even without the loss.",
"For other data sets, the score is initially lower and increases over training time, and, for some, the score stays more or less constant.",
"What all baselines have in common, is that the monotonicity score oscillates much more than when trained with the monotonicity loss.",
"Figure 4 shows an example plot for the EN-JA transliteration dataset.",
"We can vary how much we constrain attention to be monotonic by varying the weight of the monotonicity loss function ( ).",
"We analyze how this Figure 5: Relative BLEU scores as a function of the monotonicity loss for dialect normalization with transformer (all heads).",
"influences the performance on dialect normalization.",
"Figure 5 shows that non-monotonic behavior (as defined by the monotonicity loss) can be reduced by a factor of 10-20 with stable or even slightly improving performance.",
"However, BLEU drops drastically for large .",
"This highlights the advantage of our loss function over hard monotonic attention.",
"Through we can regulate the degree of monotonicity in the attention mechanism, which can be beneficial for tasks where hard monotonic attention would be too strict.",
"Since we calculate the loss on each attention component separately, we can also limit its application to specific layers and heads (in the case of multihead attention).",
"We test how restricting the monotonic behavior to only one head per layer influences the performance of the transformer on our chosen tasks.",
"Results are presented in Table",
"4. We find that monotonicity on only one head generally improves performance compared to on all heads, except for dialect normalization.",
"For grapheme-to-phoneme conversion and morphological inflection in the medium resource setting, we even see performance gains over the baseline.",
"Our results support the belief that the flexibility of multihead attention is key to the success of the transformer.",
"If applied to all heads, the monotonicity loss reduces variability in the attention distribution of the different heads, i.e. with high , all heads attend to the same source position.",
"We suspect that this severely limits the capacity of transformer models and explains why rewarding monotonicity on only one head is beneficial.",
"These findings are also important in the context of the work by Voita et al. (2019) who find that attention heads tend to learn specialized functions.",
"Having one monotonic attention head could be a complementary way to encourage more diversity amongst heads, next to disagreement regularization (Li et al., 2018).",
"Indeed, we observe that for grapheme-to-phoneme conversion and dialect normalization the remaining heads trained without the monotonicity loss tend to become less monotonic.",
"Attention maps are particularly interesting for dialect normalization where 1) the transformer baseline has one of the highest monotonicity losses of all our models and 2) reordering of source and target tokens is possible.",
"Figure 6 shows the attention maps for our baseline transformer and the corresponding model trained with the monotonicity loss.",
"The bottom sentence is an example where the alignment between the source and the target is monotonic.",
"Here, the baseline does show tentative monotonic behavior but with the monotonicity loss, the attention follows the main diagonal much more closely.",
"The sentence on the top, on the other hand, contains a non-monotonic alignment.",
"For a correct alignment of the past tense of to be, the model needs to peek at the very last token before the full stop.",
"This is reflected in the baseline attention map where the attention at the second decoding step is highest on the third-to-last source position.",
"However, for our model trained with the monotonicity loss, the attention follows the main diagonal and fails to mirror the correct alignment.",
"Occasional reorderings like this may explain why the monotonicity loss did not work well for this task despite it being largely monotonic.",
"We propose a model-agnostic loss function that measures and rewards monotonicity and can easily be integrated into various attention mechanisms.",
"To achieve this, we track how monotonically the average position of the attention shifts over the source sequence across time steps.",
"We show that this loss function can be seamlessly integrated into RNNs as well as transformers.",
"Models trained with our monotonicity loss learn largely monotonic behavior without any specific changes to the attention mechanism.",
"While we see some performance gains in RNNs, our results show that biasing all attention heads in transformers towards monotonic behavior is undesirable.",
"However, a bias towards monotonicity may be helpful if applied to only a subset of Performance heads with LMONO heads without LMONO G2P WER PER % mono LMONO % mono LMONO baseline 27.79 0.24 7.00 0.09 77.0% 7.26e-02 = 0 .",
"For the future, we are interested in more sophisticated schedules for the monotonicity loss, possibly reducing over the course of training.",
"This would help to learn monotonic behavior in the early training stages but gives the model more flexibility to deviate from such an attention pattern if needed.",
"In this context, our loss function could also be used as an additional pretraining objective for transfer to very low-resource tasks.",
"We would also like to test our loss function on tasks where the alignment may be harder to learn, for example in multimodal models or for long sequences.",
"Finally, using our loss function as a way to measure monotonicity could be an interesting tool for interpretability research.",
"We thank the anonymous reviewers for their feedback.",
"This project has received funding from the Swiss National Science Foundation (project nos. 176727 and 191934)."
] | [
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"result",
"method",
"result",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"other",
"other"
] |
[
"To avoid giving wrong answers, question answering (QA) models need to know when to abstain from answering.",
"Moreover, users often ask questions that diverge from the model's training data, making errors more likely and thus abstention more critical.",
"In this work, we propose the setting of selective question answering under domain shift, in which a QA model is tested on a mixture of in-domain and out-of-domain data, and must answer (i.e., not abstain on) as many questions as possible while maintaining high accuracy.",
"Abstention policies based solely on the model's softmax probabilities fare poorly, since models are overconfident on out-of-domain inputs.",
"Instead, we train a calibrator to identify inputs on which the QA model errs, and abstain when it predicts an error is likely.",
"Crucially, the calibrator benefits from observing the model's behavior on out-of-domain data, even if from a different domain than the test data.",
"We combine this method with a SQuAD-trained QA model and evaluate on mixtures of SQuAD and five other QA datasets.",
"Our method answers 56% of questions while maintaining 80% accuracy; in contrast, directly using the model's probabilities only answers 48% at 80% accuracy.",
"Question answering (QA) models have achieved impressive performance when trained and tested on examples from the same dataset, but tend to perform poorly on examples that are out-of-domain (OOD) (Jia and Liang, 2017; Chen et al., 2017; Yogatama et al., 2019; Talmor and Berant, 2019; Fisch et al., 2019).",
"Deployed QA systems in search engines and personal assistants need to gracefully handle OOD inputs, as users often ask questions that fall outside of the system's training distribution.",
"While the ideal system would correctly answer all Dataset Distributions Example question Q: What can result from disorders of the immune system?",
"OOD questions, such perfection is not attainable given limited training data (Geiger et al., 2019).",
"Instead, we aim for a more achievable yet still challenging goal: models should abstain when they are likely to err, thus avoiding showing wrong answers to users.",
"This general goal motivates the setting of selective prediction, in which a model outputs both a prediction and a scalar confidence, and abstains on inputs where its confidence is low (El-Yaniv and Wiener, 2010; Geifman and El-Yaniv, 2017).",
"In this paper, we propose the setting of selective question answering under domain shift , which captures two important aspects of real-world QA:",
"(i) test data often diverges from the training distribution, and",
"(ii) systems must know when to abstain.",
"We train a QA model on data from a source distribution, then evaluate selective prediction performance on a dataset that includes samples from both the source distribution and an unknown OOD distribution.",
"This mixture simulates the likely scenario in which users only sometimes ask questions that are covered by the training distribution.",
"While the system developer knows nothing about the unknown OOD data, we allow access to a small amount of data from a third known OOD distribution (e.g., OOD examples that they can foresee).",
"We first show that our setting is challenging because model softmax probabilities are unreliable estimates of confidence on out-of-domain data.",
"Prior work has shown that a strong baseline for in-domain selective prediction is MaxProb, a method that abstains based on the probability assigned by the model to its highest probability prediction (Hendrycks and Gimpel, 2017; Lakshminarayanan et al., 2017).",
"We find that MaxProb gives good confidence estimates on in-domain data, but is overconfident on OOD data.",
"Therefore, MaxProb performs poorly in mixed settings: it does not abstain enough on OOD examples, relative to in-domain examples.",
"We correct for MaxProb's overconfidence by using known OOD data to train a calibrator a classifier trained to predict whether the original QA model is correct or incorrect on a given example (Platt, 1999; Zadrozny and Elkan, 2002).",
"While prior work in NLP trains a calibrator on in-domain data (Dong et al., 2018), we show this does not generalize to unknown OOD data as well as training on a mixture of in-domain and known OOD data.",
"Figure 1 illustrates the problem setup and how the calibrator uses known OOD data.",
"We use a simple random forest calibrator over features derived from the input example and the model's softmax outputs.",
"We conduct extensive experiments using SQuAD (Rajpurkar et al., 2016) as the source distribution and five other QA datasets as different OOD distributions.",
"We average across all 20 choices of using one as the unknown OOD dataset and another as the known OOD dataset, and test on a uniform mixture of SQuAD and unknown OOD data.",
"On average, the trained calibrator achieves 56 .",
"1% coverage (i.e., the system answers 56 . 1% of test questions) while maintaining 80% accuracy on answered questions, outperforming MaxProb with the same QA model ( 48 . 2% coverage at 80% accuracy), using MaxProb and training the QA model on both SQuAD and the known OOD data ( 51 . 8% coverage), and training the calibrator only on SQuAD data ( 53 . 7% coverage).",
"In summary, our contributions are as follows: (1) We propose a novel setting, selective question answering under domain shift, that captures the practical necessity of knowing when to abstain on test data that differs from the training data.",
"(2) We show that QA models are overconfident on out-of-domain examples relative to in-domain examples, which causes MaxProb to perform poorly in our setting.",
"(3) We show that out-of-domain data, even from a different distribution than the test data, can improve selective prediction under domain shift when used to train a calibrator.",
"Our setting combines extrapolation to out-of-domain data with selective prediction.",
"We also distinguish our setting from the tasks of identifying unanswerable questions and outlier detection.",
"Extrapolating from training data to test data from a different distribution is an important challenge for current NLP models (Yogatama et al., 2019).",
"Models trained on many domains may still struggle to generalize to new domains, as these may involve new types of questions or require different reasoning skills (Talmor and Berant, 2019; Fisch et al., 2019).",
"Related work on domain adaptation also tries to generalize to new distributions, but assumes some knowledge about the test distribution, such as unlabeled examples or a few labeled examples (Blitzer et al., 2006; Daume III, 2007); we assume no such access to the test distribution, but instead make the weaker assumption of access to samples from a different OOD distribution.",
"Selective prediction, in which a model can either predict or abstain on each test example, is a longstanding research area in machine learning (Chow, 1957; El-Yaniv and Wiener, 2010; Geifman and El-Yaniv, 2017).",
"In NLP, Dong et al. (2018) use a calibrator to obtain better confidence estimates for semantic parsing.",
"Rodriguez et al. (2019) use a similar approach to decide when to answer QuizBowl questions.",
"These works focus on training and testing models on the same distribution, whereas our training and test distributions differ.",
"Selective prediction under domain shift.",
"Other fields have recognized the importance of selective prediction under domain shift.",
"In medical applications, models may be trained and tested on different groups of patients, so selective prediction is needed to avoid costly errors (Feng et al., 2019).",
"In computational chemistry, Toplak et al. (2014) use selective prediction techniques to estimate the set of (possibly out-of-domain) molecules for which a reactivity classifier is reliable.",
"To the best of our knowledge, our work is the first to study selective prediction under domain shift in NLP.",
"Answer validation.",
"Traditional pipelined systems for open-domain QA often have dedicated systems for answer validationjudging whether a proposed answer is correct.",
"These systems often rely on external knowledge about entities (Magnini et al., 2002; Ko et al., 2007).",
"Knowing when to abstain has been part of past QA shared tasks like RespubliQA (Penas et al., 2009) and QA4MRE (Penas et al., 2013).",
"IBM's Watson system for Jeopardy also uses a pipelined approach for answer validation (Gondek et al., 2012).",
"Our work differs by focusing on modern neural QA systems trained end-to-end, rather than pipelined systems, and by viewing the problem of abstention in QA through the lens of selective prediction.",
"Calibration.",
"Knowing when to abstain is closely related to calibrationhaving a model's output probability align with the true probability of its prediction (Platt, 1999).",
"A key distinction is that selective prediction metrics generally depend only on relative confidencessystems are judged on their ability to rank correct predictions higher than incorrect predictions (El-Yaniv and Wiener, 2010).",
"In contrast, calibration error depends on the abso-lute confidence scores.",
"Nonetheless, we will find it useful to analyze calibration in Section 5.3, as miscalibration on some examples but not others does imply poor relative ordering, and therefore poor selective prediction.",
"Ovadia et al. (2019) observe increases in calibration error under domain shift.",
"Identifying unanswerable questions.",
"In SQuAD 2.0, models must recognize when a paragraph does not entail an answer to a question (Rajpurkar et al., 2018).",
"Sentence selection systems must rank passages that answer a question higher than passages that do not (Wang et al., 2007; Yang et al., 2015).",
"In these cases, the goal is to abstain when no system (or person) could infer an answer to the given question using the given passage.",
"In contrast, in selective prediction, the model should abstain when it would give a wrong answer if forced to make a prediction.",
"Outlier detection.",
"We distinguish selective prediction under domain shift from outlier detection, the task of detecting out-of-domain examples (Scholkopf et al., 1999; Hendrycks and Gimpel, 2017; Liang et al., 2018).",
"While one could use an outlier detector for selective classification (e.g., by abstaining on all examples flagged as outliers), this would be too conservative, as QA models can often get a non-trivial fraction of OOD examples correct (Talmor and Berant, 2019; Fisch et al., 2019).",
"Hendrycks et al. (2019b) use known OOD data for outlier detection by training models to have high entropy on OOD examples; in contrast, our setting rewards models for predicting correctly on OOD examples, not merely having high entropy.",
"Given an input x , the selective prediction task is to output ( y, c ) where y Y ( x ) , the set of answer candidates, and c R denotes the model's confidence.",
"Given a threshold R , the overall system predicts y if c and abstain otherwise.",
"The risk-coverage curve provides a standard way to evaluate selective prediction methods (El-Yaniv and Wiener, 2010).",
"For a test dataset D test , any choice of has an associated coverage the fraction of D test the model makes a prediction onand risk the error on that fraction of D test .",
"As decreases, coverage increases, but risk will usually also increase.",
"We plot risk versus coverage and evaluate on the area under this curve (AUC), as well as the maximum possible coverage for a desired risk level.",
"The former metric averages over all , painting an overall picture of selective prediction performance, while the latter evaluates at a particular choice of corresponding to a specific level of risk tolerance.",
"We deviate from prior work by considering the setting where the model's training data D train and test data D test are drawn from different distributions.",
"As our experiments demonstrate, this setting is challenging because standard QA models are overconfident on out-of-domain inputs.",
"To formally define our setting, we specify three data distributions.",
"First, p source is the source distribution, from which a large training dataset D train is sampled.",
"Second, q unk is an unknown OOD distribution , representing out-of-domain data encountered at test time.",
"The test dataset D test is sampled from p test , a mixture of p source and q unk : p test = p source + (1 ) q unk (1) for (0 , 1) .",
"We choose = 12 , and examine the effect of changing this ratio in Section 5.8.",
"Third, q known is a known OOD distribution , representing examples not in p source but from which the system developer has a small dataset D calib .",
"While our framework is general, we focus on extractive question answering, as exemplified by SQuAD (Rajpurkar et al., 2016), due to its practical importance and the diverse array of available QA datasets in the same format.",
"The input x is a passage-question pair ( p, q ) , and the set of answer candidates Y ( x ) is all spans of the passage p .",
"A base model f defines a probability distribution f ( y | x ) over Y ( x ) .",
"All selective prediction methods we consider choose y = arg max y (cid:48) Y ( x ) f ( y (cid:48) | x ) , but differ in their associated confidence c .",
"Recall that our setting differs from the standard selective prediction setting in two ways: unknown OOD data drawn from q unk appears at test time, and known OOD data drawn from q known is available to the system.",
"Intuitively, we expect that systems must use the known OOD data to generalize to the unknown OOD data.",
"In this section, we present three standard selective prediction methods for in-domain data, and show how they can be adapted to use data from q known .",
"The first method, MaxProb, directly uses the probability assigned by the base model to y as an estimate of confidence.",
"Formally, MaxProb with model f estimates confidence on input x as: c MaxProb = f ( y | x ) = max y (cid:48) Y ( x ) f ( y (cid:48) | x ) .",
"MaxProb is a strong baseline for our setting.",
"Across many tasks, MaxProb has been shown to distinguish in-domain test examples that the model gets right from ones the model gets wrong (Hendrycks and Gimpel, 2017).",
"MaxProb is also a strong baseline for outlier detection, as it is lower for out-of-domain examples than in-domain examples (Lakshminarayanan et al., 2017; Liang et al., 2018; Hendrycks et al., 2019b).",
"This is desirable for our setting: models make more mistakes on OOD examples, so they should abstain more on OOD examples than in-domain examples.",
"MaxProb can be used with any base model f .",
"We consider two such choices: a model f src trained only on D train , or a model f src+known trained on the union of D train and D calib .",
"For neural networks, another standard approach to estimate confidence is to use dropout at test time.",
"Gal and Ghahramani (2016) showed that dropout gives good confidence estimates on OOD data.",
"Given an input x and model f , we compute f on x with K different dropout masks, obtaining prediction distributions p 1 , . . . , p K , where each p i is a probability distribution over Y ( x ) .",
"We consider two statistics of these p i 's that are commonly used as confidence estimates.",
"First, we take the mean of p i ( y ) across all i (Lakshminarayanan et al., 2017): c DropoutMean = 1 KK (cid:88) i =1 p i ( y ) .",
"This can be viewed as ensembling the predictions across all K dropout masks by averaging them.",
"Second, we take the negative variance of the p i ( y ) 's (Feinman et al., 2017; Smith and Gal, 2018): c DropoutVar = Var[ p 1 ( y ) , . . . , p K ( y )] .",
"Higher variance corresponds to greater uncertainty, and hence favors abstaining.",
"Like MaxProb, dropout can be used either with f trained only on D train , or on both D train and the known OOD data.",
"Test-time dropout has practical disadvantages compared to MaxProb.",
"It requires access to internal model representations, whereas MaxProb only requires black box access to the base model (e.g., API calls to a trained model).",
"Dropout also requires K forward passes of the base model, leading to a K -fold increase in runtime.",
"Our final method trains a calibrator to predict when a base model (trained only on data from p source ) is",
"correct (Platt, 1999; Dong et al., 2018).",
"We differ from prior work by training the calibrator on a mixture of data from p source and q known , anticipating the test-time mixture of p source and q unk .",
"More specifically, we hold out a small number of p source examples from base model training, and train the calibrator on the union of these examples and the q known examples.",
"We define c Calibrator to be the prediction probability of the calibrator.",
"The calibrator itself could be any binary classification model.",
"We use a random forest classifier with seven features: passage length, the length of the predicted answer y , and the top five softmax probabilities output by the model.",
"These features require only a minimal amount of domain knowledge to define.",
"Rodriguez et al. (2019) similarly used multiple softmax probabilities to decide when to answer questions.",
"The simplicity of this model makes the calibrator fast to train when given new data from q known , especially compared to retraining the QA model on that data.",
"We experiment with four variants of the calibrator.",
"First, to measure the impact of using known OOD data, we change the calibrator's training data: it can be trained either on data from p source only, or both p source and q known data as described.",
"Second, we consider a modification where instead of the model's probabilities, we use probabilities from the mean ensemble over dropout masks, as described in Section 4.2, and also add c DropoutVar as a feature.",
"As discussed above, dropout features are costly to compute and assume white-box access to the model, but may result in better confidence estimates.",
"Both of these variables can be changed independently, leading to four configurations.",
"Data.",
"We use SQuAD 1.1 (Rajpurkar et al., 2016) as the source dataset and five other datasets as OOD datasets: NewsQA (Trischler et al., 2017), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017), HotpotQA (Yang et al., 2018), and Natural Questions (Kwiatkowski et al., 2019).",
"1 These are all extractive question answering datasets where all questions are answerable; however, they vary widely in the nature of passages (e.g., Wikipedia, news, web snippets), questions (e.g., Jeopardy and trivia questions), and relationship between pas-1 We consider these different datasets to represent different domains, hence our usage of the term domain shift. sages and questions (e.g., whether questions are written based on passages, or passages retrieved based on questions).",
"We used the preprocessed data from the MRQA 2019 shared task (Fisch et al., 2019).",
"For HotpotQA, we focused on multi-hop questions by selecting only hard examples, as defined by Yang et al. (2018).",
"In each experiment, two different OOD datasets are chosen as q known and q unk .",
"All results are averaged over all 20 such combinations, unless otherwise specified.",
"We sample 2,000 examples from q known for D calib , and 4,000 SQuAD and 4,000 q unk examples for D test .",
"We evaluate using exact match (EM) accuracy, as defined by SQuAD (Rajpurkar et al., 2016).",
"Additional details can be found in Appendix A.1.",
"QA model.",
"For our QA model, we use the BERT-base SQuAD 1.1 model trained for 2 epochs (De-vlin et al., 2019).",
"We train six models total: one f src and five f src+known 's, one for each OOD dataset.",
"Selective prediction methods.",
"For test-time dropout, we use K = 30 different dropout masks, as in Dong et al. (2018).",
"For our calibrator, we use the random forest implementation from Scikit-learn (Pedregosa et al., 2011).",
"We train on 1,600 SQuAD examples and 1,600 known OOD examples, and use the remaining 400 SQuAD and 400 known OOD examples as a validation set to tune calibrator hyperparameters via grid search.",
"We average our results over 10 random splits of this data.",
"When training the calibrator only on p source , we use 3,200 SQuAD examples for training and 800 for validation, to ensure equal dataset sizes.",
"Additional details can be found in Appendix A.2.",
"Training a calibrator with q known outperforms other methods.",
"Table 1 compares all methods that do not use test-time dropout.",
"Compared to MaxProb with f src+known , the calibrator has 4 .",
"3 points and 6 .",
"7 points higher coverage at 80% and 90% accuracy respectively, and 1 .",
"1 points lower AUC.",
"2 This demonstrates that training a calibrator is a better use of known OOD data than training a QA model.",
"The calibrator trained on both p source and q known also outperforms the calibrator trained on p source alone by 2 .",
"4% coverage at 80% accuracy.",
"All methods perform far worse than the optimal selective predictor with the given base model, though 2 95% confidence interval is [1 . 01 , 1 . 69] , using the paired bootstrap test with 1000 bootstrap samples.",
"Test-time dropout improves results but is expensive.",
"Table 2 shows results for methods that use test-time dropout, as described in Section 4.2.",
"The negative variance of p i ( y ) 's across dropout masks serves poorly as an estimate of confidence, but the mean performs well.",
"The best performance is attained by the calibrator using dropout features, which has 3 .",
"9% higher coverage at 80% accuracy than the calibrator with non-dropout features.",
"Since test-time dropout introduces substantial (i.e., K fold) runtime overhead, our remaining analyses focus on methods without test-time dropout.",
"The QA model has lower non-trivial accuracy on OOD data.",
"Next, we motivate our focus on selective prediction, as opposed to outlier detection, by showing that the QA model still gets a non-trivial fraction of OOD examples correct.",
"Table 3 shows the (non-selective) exact match scores 3 As the QA model has fixed accuracy < 100% on D test , it is impossible to achieve 0% risk at 100% coverage.",
"for all six QA models used in our experiments on all datasets.",
"All models get around 80% accuracy on SQuAD, and around 40% to 50% accuracy on most OOD datasets.",
"Since OOD accuracies are much higher than 0% , abstaining on all OOD examples would be overly conservative.",
"4 At the same time, since OOD accuracy is worse than in-domain accuracy, a good selective predictor should answer more in-domain examples and fewer OOD examples.",
"Training on 2,000 q known examples does not significantly help the base model extrapolate to other q unk distributions.",
"Results hold across different amounts of known OOD data.",
"As shown in Figure 2, across all amounts of known OOD data, using it to train and validate the calibrator (in an 8020 split) performs better than adding all of it to the QA training data and using MaxProb.",
"We now show why MaxProb performs worse in our setting compared to the in-domain setting: it is mis-calibrated on out-of-domain examples.",
"Figure 3a shows that MaxProb values are generally lower for OOD examples than in-domain examples, following previously reported trends (Hendrycks and Gimpel, 2017; Liang et al., 2018).",
"However, the MaxProb values are still too high out-of-domain.",
"Figure 3b shows that MaxProb is not well calibrated: it is underconfident in-domain, and overconfident out-of-domain.",
"5 For example, for a Max-4 In Section A.3, we confirm that an outlier detector does not achieve good selective prediction performance.",
"5 The in-domain underconfidence is because SQuAD (and some other datasets) provides only one answer at training time, but multiple answers are considered correct at test time.",
"In Ap-Train Data / Test Data SQuAD TriviaQA HotpotQA NewsQA NaturalQuestions SearchQA SQuAD only 80.95 48.43 44.88 40.45 42.78 17.98 SQuAD + 2K TriviaQA 81.48 (50.50) 43.95 39.15 47.05 25.23 SQuAD + 2K HotpotQA 81.15 49.35 (53.60) 39.85 48.18 24.40 SQuAD + 2K NewsQA 81.50 50.18 42.88 (44.00) 47.08 20.40 SQuAD + 2K NaturalQuestions 81.48 51.43 44.38 40.90 (54.85) 25.95 SQuAD + 2K SearchQA 81.60 56.58 44.30 40.15 47.05 (59.80) Table 3: Exact match accuracy for all six QA models on all six test QA datasets.",
"Prob of 0 .",
"6 , the model is about 80% likely to get the question correct if it came from SQuAD (in-domain), and 45% likely to get the question correct if it was OOD.",
"When in-domain and OOD examples are mixed at test time, MaxProb therefore does not abstain enough on the OOD examples.",
"Figure 3d shows that the calibrator is better calibrated, even though it is not trained on any unknown OOD data.",
"In Appendix A.5, we show that the calibrator abstains on more OOD examples than MaxProb.",
"Our finding that the BERT QA model is not overconfident in-domain aligns with Hendrycks et al. (2019a), who found that pre-trained computer vision models are better calibrated than models trained from scratch, as pre-trained models can be pendix A.4, we show that removing multiple answers makes MaxProb well-calibrated in-domain; it stays overconfident out-of-domain.",
"trained for fewer epochs.",
"Our QA model is only trained for two epochs, as is standard for BERT.",
"Our findings also align with Ovadia et al. (2019), who find that computer vision and text classification models are poorly calibrated out-of-domain even when well-calibrated in-domain.",
"Note that miscalibration out-of-domain does not imply poor selective prediction on OOD data, but does imply poor selective prediction in our mixture setting.",
"We next investigated how choice of q known affects generalization of the calibrator to q unk .",
"Figure 4 shows the percentage reduction between MaxProb and optimal AUC achieved by the trained calibrator.",
"The calibrator outperforms MaxProb over all dataset combinations, with larger gains when q known and q unk are similar.",
"For example, samples from TriviaQA help generalization to SearchQA and vice versa; both use web snippets as passages.",
"Samples from NewsQA, the only other non-Wikipedia dataset, are also helpful for both.",
"On the other hand, no other dataset significantly helps generalization to HotpotQA, likely due to HotpotQA's unique focus on multi-hop questions.",
"We determine the importance of each feature of the calibrator by removing each of its features individually, leaving the rest.",
"From Table 4, we see that the most important features are the softmax probabilities and the passage length.",
"Intuitively, passage length is meaningful both because longer passages have more answer candidates, and because passage length differs greatly between different domains.",
"We examined calibrator errors on two pairs of q known and q unk one similar pair of datasets and one dissimilar.",
"For each, we sampled 100 errors in which the system confidently gave a wrong answer (overconfident), and 100 errors in which the sys-Figure 4: Results for different choices of q known (y-axis) and q unk (x-axis).",
"For each pair, we report the percent AUC improvement of the trained calibrator over MaxProb, relative to the total possible improvement.",
"Datasets that use similar passages (e.g., SearchQA and TriviaQA) help each other the most.",
"Main diagonal elements (shaded) assume access to q unk (see Section 5.9).",
"tem abstained but would have gotten the question correct if it had answered (underconfident).",
"These were sampled from the 1000 most overconfident or underconfident errors, respectively.",
"q known = NewsQA, q unk = TriviaQA.",
"These two datasets are from different non-Wikipedia sources.",
"62% of overconfidence errors are due to the model predicting valid alternate answers, or span mismatchesthe model predicts a slightly different span than the gold span, and should be considered correct; thus the calibrator was not truly overconfident.",
"This points to the need to improve QA evaluation metrics (Chen et al., 2019).",
"45% of underconfidence errors are due to the passage requiring coreference resolution over long distances, including with the article title.",
"Neither SQuAD nor NewsQA passages have coreference chains as long or contain titles, so it is unsurprising that the calibrator struggles on these cases.",
"Another 25% of underconfidence errors were cases in which there was insufficient evidence in the paragraph to answer the question (as TriviaQA was constructed via distant supervision), so the calibrator was not incorrect to assign low confidence.",
"16% of all underconfidence errors also included phrases that would not be common in SQuAD and NewsQA, such as using said bye bye for banned. q known = NewsQA, q unk = HotpotQA.",
"These two datasets are dissimilar from each other in multiple ways.",
"HotpotQA uses short Wikipedia passages and focuses on multi-hop questions; NewsQA has much longer passages from news articles and does not focus on multi-hop questions.",
"34% of the overconfidence errors are due to valid alternate answers or span mismatches.",
"On 65% of the underconfidence errors, the correct answer was the only span in the passage that could plausibly answer the question, suggesting that the model arrived at the answer due to artifacts in HotpotQA that facilitate guesswork (Chen and Durrett, 2019; Min et al., 2019).",
"In these situations, the calibrator's lack of confidence is therefore justifiable.",
"We now study the relationship between selective prediction and identifying unanswerable questions.",
"Unanswerable questions do not aid selective prediction.",
"We trained a QA model on SQuAD 2.0 (Rajpurkar et al., 2018), which augments SQuAD 1.1 with unanswerable questions.",
"Our trained calibrator with this model gets 18 .",
"38 AUC, which is very close to the 18 .",
"47 for the model trained on SQuAD 1.1 alone.",
"MaxProb also performed similarly with the SQuAD 2.0 model ( 20 . 81 AUC) and SQuAD 1.1 model ( 20 . 54 AUC).",
"Selective prediction methods do not identify unanswerable questions.",
"For both MaxProb and our calibrator, we pick a threshold (cid:48) R and predict that a question is unanswerable if the confidence c < (cid:48) .",
"We choose (cid:48) to maximize SQuAD 2.0 EM score.",
"Both methods perform poorly: the calibrator (averaged over five choices of q known ) achieves 54 .",
"0 EM, while MaxProb achieves 53 .",
"1 EM.",
"6 These results only weakly outperform the 6 We evaluate on 4000 questions randomly sampled from the SQuAD 2.0 development set.",
"majority baseline of 48 .",
"9 EM.",
"Taken together, these results indicate that identifying unanswerable questions is a very different task from knowing when to abstain under distribution shift.",
"Our setting focuses on test data that is dissimilar to the training data, but on which the original QA model can still correctly answer a nontrivial fraction of examples.",
"In contrast, unanswerable questions in SQuAD 2.0 look very similar to answerable questions, but a model trained on SQuAD 1.1 gets all of them wrong.",
"Until now, we used = 12 both for D test and training the calibrator.",
"Now we vary for both, ranging from using only SQuAD to only OOD data (sam-pled from q known for D calib and from q unk for D test ).",
"Figure 5 shows the difference in AUC between the trained calibrator and MaxProb.",
"At both ends of the graph, the difference is close to 0, showing that MaxProb performs well in homogeneous settings.",
"However, when the two data sources are mixed, the calibrator outperforms MaxProb significantly.",
"This further supports our claim that MaxProb performs poorly in mixed settings.",
"We note that our findings do not hold in the alternate setting where we have access to samples from q unk (instead of q known ).",
"Training the QA model with this OOD data and using MaxProb achieves average AUC of 16 .",
"35 , whereas training a calibrator achieves 17 .",
"87 ; unsurprisingly, training on examples similar to the test data is helpful.",
"We do not focus on this setting, as our goal is to build selective QA models for unknown distributions.",
"In this paper, we propose the setting of selective question answering under domain shift, in which systems must know when to abstain on a mixture of in-domain and unknown OOD examples.",
"Our setting combines two important goals for real-world systems: knowing when to abstain, and handling distribution shift at test time.",
"We show that models are overconfident on OOD examples, leading to poor performance in the our setting, but training a calibrator using other OOD data can help correct for this problem.",
"While we focus on question answering, our framework is general and extends to any prediction task for which graceful handling of out-of-domain inputs is necessary.",
"Across many tasks, NLP models struggle on out-of-domain inputs.",
"Models trained on standard natural language inference datasets (Bowman et al., 2015) generalize poorly to other distributions (Thorne et al., 2018; Naik et al., 2018).",
"Achieving high accuracy on out-of-domain data may not even be possible if the test data requires abilities that are not learnable from the training data (Geiger et al., 2019).",
"Adversarially chosen ungrammatical text can also cause catastrophic errors (Wallace et al., 2019; Cheng et al., 2020).",
"In all these cases, a more intelligent model would recognize that it should abstain on these inputs.",
"Traditional NLU systems typically have a natural ability to abstain.",
"SHRDLU recognizes statements that it cannot parse, or that it finds ambiguous (Winograd, 1972).",
"QUALM answers reading comprehension questions by constructing reasoning chains, and abstains if it cannot find one that supports an answer (Lehnert, 1977).",
"NLP systems deployed in real-world settings inevitably encounter a mixture of familiar and unfamiliar inputs.",
"Our work provides a framework to study how models can more judiciously abstain in these challenging environments.",
"Reproducibility.",
"All code, data and experiments are available on the Codalab platform at https: //bit.ly/35inCah .",
"Acknowledgments.",
"This work was supported by the DARPA ASED program under FA8650-18-2-7882.",
"We thank Ananya Kumar, John Hewitt, Dan Iter, and the anonymous reviewers for their helpful comments and insights."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"objective",
"result",
"result",
"method",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain"
] |
[
"Abstract Question answering over temporal knowledge graphs (KGs) efficiently uses facts contained in a temporal KG, which records entity relations and when they occur in time, to answer natural language questions (e.g., Who was the president of the US before Obama?).",
"These questions often involve three time-related challenges that previous work fail to adequately address: 1) questions often do not specify exact timestamps of interest (e.g., Obama instead of 2000); 2) subtle lexical differences in time relations (e.g., before vs after); 3) off-the-shelf temporal KG embeddings that previous work builds on ignore the temporal order of timestamps, which is crucial for answering temporal-order related questions.",
"In this paper, we propose a time-sensitive question answering (TSQA) framework to tackle these problems.",
"TSQA features a timestamp estimation module to infer the unwritten timestamp from the question.",
"We also employ a time-sensitive KG encoder to inject ordering information into the temporal KG embeddings that TSQA is based on.",
"With the help of techniques to reduce the search space for potential answers, TSQA significantly outperforms the previous state of the art on a new benchmark for question answering over temporal KGs, especially achieving a 32% (absolute) error reduction on complex questions that require multiple steps of reasoning over facts in the temporal KG.",
"Temporal knowledge graphs (KGs) record the relations between entities and the timestamp or time period when such relation hold, e.g., in the form of a quadruple: (Franklin D. Roosevelt, position held, President of USA, [1933, 1945]).",
"This makes them a perfect source of knowledge to answer questions that involve knowledge of when certain events occurred as well as how they are related temporally (see Figure 1 for an example).",
"Unlike question Work done at JD AI Research.",
"answering (QA) over non-temporal KGs that is mainly concerned with relational inference, a core challenge in temporal KGQA is correctly identifying the time of reference mentioned explicitly or implicitly in the question, and locating relevant facts by jointly reasoning over relations and timestamps.",
"Inspired by work on relational KGQA (Huang et al., 2019; Saxena et al., 2020), where knowledge graph embeddings (Dasgupta et al., 2018; Garca-Durn et al., 2018; Goel et al., 2020; Wu et al., 2020; Lacroix et al., 2020) learned independently of question answering are used as input to KGQA models, previous work (Saxena et al., 2021) employs temporal KG embeddings to attack the problem of temporal KGQA.",
"Despite its relative success on simple temporal questions that directly queries facts in the KG with one out of the four facts left as the answer (e.g., When was Franklin D. Roosevelt the President of USA? or What position did Franklin D. Roosevelt hold between 1933 and 1945?), this approach still struggles to handle questions that require multiple steps of relational-temporal reasoning (e.g., the example in Figure 8017 1).",
"We identify three main challenges that hinder further progress on temporal KGQA.",
"Firstly, complex temporal questions often require inferring the correct point of reference in time, which is not considered by previous work.",
"For instance, to correctly answer the question in Figure 1, it is crucial that we first identify that World War II took place between 1939 and 1945, and look for entities with the desired relation with President of USA in the time interval specified by these times.",
"Secondly, unlike entity relations, which are usually expressed in natural language with a handful of content words that correspond well with their recorded relations in KGs (e.g., What position did ... hold ... vs the position held relation), temporal relations often involve just one or two prepositions (e.g., before or during) and are expressed only implicitly in temporal KGs (e.g., nowhere is it clearly stated that 1931 is earlier than, or before, 1934, by a gap of 3 years).",
"As a result, a small lexical change can drastically alter the temporal relation expressed by the question, and therefore the answer set.",
"Thirdly, previous work on temporal KGQA build on temporal KG embeddings, where each timestamp is assigned a randomly initialized vector representation that is jointly optimized with entity and relation representations to reconstruct quadruples in the KG from embeddings.",
"While sound as a standalone method for encoding knowledge in temporal KGs, this approach does not guarantee that the learned timestamp representations can recover implicit temporal relations like temporal orders or distance, which are crucial for temporal KGQA.",
"In this paper, we propose a time-sensitive question answering framework (TSQA) to address these challenges.",
"We first equip the temporal KGQA model with a time estimation module that infers the unstated timestamps from questions as the first step of reasoning, and feed the result into relational inference as a reference timestamp.",
"Even without explicit training data for this module, the explicit factorization of the problem yields significant improvement over previous work on complex questions that require reasoning over multiple temporal quadruples.",
"To improve the sensitivity of our question encoder to time relation words, we also propose auxiliary contrastive losses that contrast the answer prediction and time estimation for questions that differ only by the time relation word (e.g., before vs after).",
"By leveraging the mutual exclusiveness of answers and the prior knowledge regarding potential time estimates from different time relation words, we observe further improvements in model performance on complex questions.",
"Next, to learn temporal KG embeddings with prior knowledge of temporal order and distance built in, we introduce an auxiliary loss of time-order classification between each pair of timestamp embeddings.",
"As a result, the knowledge in the temporal KG can be distilled into the entity, relation, and timestamp embeddings where the timestamp embeddings can naturally recover order and distance information between the underlying timestamps, thus improving the performance of temporal KGQA where such information is crucial.",
"Finally, we enhance TSQA with KG-based approaches to narrow the search space to speed up model training and inference, as well as reduce the number of false positives in model prediction.",
"As a result, TSQA outperforms the previous state of the art on the CRONQUESTIONS benchmark (Saxena et al., 2021) by a large margin.",
"To summarize, our contributions in this paper are:",
"a) we propose a time-sensitive question answering framework (TSQA) that performs time estimation for complex temporal answers;",
"b) we present contrastive losses that improve model sensitivity to time relation words in the question;",
"c) we propose a time-sensitive temporal KG embedding approach that benefits temporal KGQA;",
"d) with the help of KG-based pruning technique, our TSQA model outperforms the previous state of the art by a large margin.",
"Temporal Knowledge Graph Embedding .",
"Knowledge graph embedding learning (Bordes et al., 2013; Yang et al., 2014; Trouillon et al., 2016; Dettmers et al., 2018; Shang et al., 2019; Sun et al., 2019; Tang et al., 2019; Ji et al., 2021) has been an active research area with applications directly in knowledge base completion and relation extractions.",
"Recently, there are several works that extended the static KG embedding models to temporal KGs.",
"Jiang et al. (2016) first attempt to extend TransE (Bordes et al., 2013) by adding a timestamp embedding into the score function.",
"Later, Hyte (Dasgupta et al., 2018) projects each timestamp with a corresponding hyperplane and utilizes the TransE score in each space.",
"Garca-Durn et al. (2018) extend TransE and 8018 DistMult by utilizing recurrent neural networks to learn time-aware representations of relation types.",
"TCompLEx (Lacroix et al., 2020) extends the ComplEx with time based on the canonical decomposition of tensors of order",
"4. Temporal QA on Knowledge Graph .",
"Temporal QA have mostly been studied in the context of reading comprehension.",
"ForecastQA (Jin et al., 2021) formulates the forecasting problem as a multiple-choice question answering task, where both the articles and questions include the timestamps.",
"The recent released TORQUE (Ning et al., 2020) is a dataset that explores the temporal ordering relations between events described in a passage of text.",
"Another direction is the temporal question answering over knowledge bases (KB) (Jia et al., 2018b,a), which retrieves time information from the KB.",
"TempQuestions (Jia et al., 2018a) is a KGQA dataset specifically aimed at temporal QA.",
"Based on this dataset, Jia et al. (2018b) design a method that decomposes and rewrites each question into nontemporal sub-question and temporal sub-question.",
"Here the KG used in TempQuestions is based on a subset of FreeBase which is not a temporal KG.",
"Later Jia et al. (2021) proposes a first end-to-end system (EXAQT) for answering complex temporal questions, which takes advantage of the question-relevant compact subgraphs within the KG, and relational graph convolutional networks (Schlichtkrull et al., 2018) for predicting the answers.",
"All previous datasets only include a limited number of temporal questions.",
"Recently, a much larger temporal KGQA dataset CRONQUESTIONS (Saxena et al., 2021) is released, which includes both the temporal questions and the temporal KG with time annotation for all edges.",
"Based on this dataset, the CronKGQA model (Saxena et al., 2021) is presented that exploits recent advances in Temporal KG embeddings and achieves performance superior to all baselines.",
"In this section, we first give the problem definition of temporal question answering over temporal knowledge graph.",
"Then, we introduce the framework to solve this problem, which integrates time sensitivity into KG embedding and answer inference.",
"Finally, we describe the key modules of our proposed system in details.",
"QA on Temporal KG aims at finding out the answer from a given temporal KG G = ( V , E , R , T ) for a given free-text temporal question Q containing implicit temporal expression, and the answer is either an entity of entity set V or a timestamp of timestamp set T .",
"Here, E V V is a set of edges, and R is the set of relations.",
"Edge from a quadruple ( s, r, [ t s , t e ] , o ) indicates the relation r R holds between subject entity s and object entity o during time interval [ t s , t e ] ( t s < t e and t e/s T ).",
"Framework .",
"Our framework resorts to KG embeddings along with pretrained language models to perform temporal KGQA.",
"Figure 2 shows the architecture which consists of two modules: 1) time-aware TKG encoder; 2) time-sensitive question answer.",
"The time-aware TKG encoder extends the existing TKG embedding method by adding an auxiliary time-order learning task to consider the quadruple orders.",
"And the time sensitive QA module first performs neighboring graph extraction to reduce the search space for question answer, then performs joint training for answer/time prediction and time-sensitive contrastive learning to enhance the model ability in capturing temporal signals in free-text question.",
"Next, we will introduce these two modules in details.",
"We first briefly review a time-aware KG embedding method based on TCompLEx (Lacroix et al., 2020) since it has been used in (Saxena et al., 2021) for TKGQA and shows competitive performance.",
"Next, we show that how to perform TCompLEx on temporal KG, then analyze its weakness in TKGQA especially for complex question and further overcome such weakness by introducing an auxiliary time-order learning task in TKG embedding.",
"TCompLEx for TKG .",
"TCompLEx is an extension of ComplEx considering time information, which not only encodes the entity and relation to complex vectors, but also maps each timestamp to a complex vector.",
"To perform TCompLEx over temporal KG in our problem definition, we first reformulate each quadruple to a set of new quadruples by ( s, r, [ t s , t e ] , o ) = { ( s, r, t, o ) | t s t t e } (1) Let e s , e r , e t , e o C d be the complex-value embeddings of s, r, t, o , respectively.",
"Then, TCom-8019 Figure 2: The architecture of our TSQA model ( Left : Time-aware TKG encoder; Right : Time-Sensitive TKG-QA).",
"pLEx scores each quadruple ( s, r, t, o ) by S ( s, r, o, t ) = Re ( e s , e r , e o , e t ) (2) where Re(.) denotes the real part of a complex vector, and denotes the multi-linear product.",
"Finally, we use a loss function similar to the negative sampling loss for effectively TCompLEx training.",
"LTC = log ( ( S ( s, r, o, t ))) 1 KK (cid:88) i =1 ( log ( ( S ( s i , r, o i , t i ) ))) , (3) where is a fixed margin, is the sigmoid function, ( s i , r, o i , t i ) is the i -th negative quadruple.",
"According to the loss function in equation 3, we observe that TCompLEx only cares about whether the quadruple is true or false and ignores the orders of different quadruples occur.",
"However, the time orders are critical to find the correct answer in knowledge graphs.",
"For example, to answer the Who is the President of USA before William J.",
"Clinton?, we need not only the two facts (President of USA, Position Held, Ronald Reagan, [1981, 1989]) and (President of USA, Position Held, William J. Clinton, [1993, 2001]), but also the time order of these facts.",
"To overcome such a limit of TCompLEx in TKGQA, we introduce an auxiliary time-order learning task over time-embeddings.",
"Time-order learning in TKG .",
"To keep the time order in embedding spaces, we first sort the timestamps in T by an ascending order and get ( t 1 , t 2 , , t | T | ) and t i < t j if 1 i < j | T | .",
"Let t i = [ Re ( e t i ) , Im ( e t i )] R 2 d be the concatenation the real and imaginary components of embedding e t i of timestamp t i .",
"Inspired by position embedding in (Vaswani et al., 2017), we first initialize the timestamp embedding t i as follows.",
"t i [2 k ] = sin( i 10000 2 k/ 2 d ) t i [2 k + 1] = cos( i 10000 2 k/ 2 d ) (4) where 0 k d 1 .",
"Afterwards, for any pair of timestamps ( t i , t j ), we calculate the probability of time order as: p t ( i, j ) = sigmoid (( t 1 t 2 ) TW t ) , (5) where W t R 2 d represents a parameter vector.",
"Based on the time-order probabilities, we introduce a binary cross-entropy loss as a time-order constraint over timestamp embeddings as follow: LTO = ( i, j ) log( p t ( i, j )) (1 ( i, j )) log(1 p t ( i, j )) , (6) where ( i, j ) = 1 if t i < t j else ( i, j ) = 0 .",
"Joint-training .",
"A weighted sum of T-CompLEx training loss and time-order constraint is considered as the final objective function for the joint training for TKG embedding.",
"In this section, we introduce our time-sensitive question answering module from the following aspects in details: 1) question decomposition which",
"divides the questions as entities and relations described in free-text; 2) entity neighboring subgraph extraction which reduces the search space of candidate timestamps and answer entities; and 3) time-sensitive question answer which explores the time information implied in both KG and questions to help the model find the answer.",
"For each question Q , we first identify all the entities {Ent 1 , Ent 2 , , Ent k } in Q which also appear in KG G , i.e., Ent i E ( 1 i k ).",
"Then, by replacing the entities in question Q with special token [ subject ] and [ object ] in order, we obtain an entity-independent temporal relation description in free-text named temporal expression Q .",
"Taking the question When did Obama hold the position of President of USA ? as an example, by replacing the identified entities President of USA and Obama, we get its temporal expression as When did subject hold the position of object ?.",
"Next [CLS] + Q are fed into BERT that outputs [CLS] token embedding as e q R d bert , where d bert is the output dimension of BERT, and two kinds of question representations as follows.",
"where q r , q t R 2 d represents the embedding of relation and time implied in question, respectively.",
"W R d bert 2 d , W rq , W tq R 2 d 2 d are the parameter matrix, and represents the activation function.",
"Finally, to facilitate the calculation with KG embeddings, we reformulate q r , q t in complex space as: q r = q r [0 : d ] + 1 q r [ d : 2 d ] (9) q t = q t [0 : d ] + 1 q t [ d : 2 d ] (10) 3.3.2 Entity Neighbor Graph Extraction Let {Ent 1 , Ent 2 , , Ent k } be the k entities extracted from question Q , we first extract the m hop neighboring sub-graph G i for each entity Ent i .",
"Then, by combining these k sub-graphs, we obtain the search graph G q for question Q : G q = ki =1 G i .",
"Suppose that E q and T q are the sets of entities and timestamps appearing in G q , respectively, they constitute the search space of time and entity prediction in our TKG-QA method.",
"In training stage, we set the hop number m as the minimum value which results in correct answer entity appearing in G q .",
"In testing stage, we set m as the largest hop number used in training stage.",
"In practice, the size of graph G q in usually much smaller than that of whole graph G .",
"For example, in CronKGQA, the average value of | G q | / | G | is about 3%.",
"Entity Neighboring graph extraction aims at reducing the search space of candidate timestamps and answer entities.",
"This results in not only more efficient training procedure, but also performance improvement of question answer because a larger number of candidates usually means a much more difficult learning problem.",
"For temporal question answer over KG, the interaction of time and answer entity prediction is very important since the time range brings a strong constraint on the search space of answers.",
"However, the existing method (Saxena et al., 2021) usually performs such two predictions independently which results in poor performance especially for complex questions which need to consider multiple facts to get the answer.",
"To overcome this limitation, we directly feed the intermediate time representation t q learned from time estimation to answer prediction to enhance the interaction of these two tasks.",
"Time Estimation.",
"Based on the embeddings e s and e o of subject entity s and object entity o from KG and the time embedding q t from a question, we design the time estimation function FT for learning the time embedding t q as follows: 1 t q = FT ( e s , q t , e o ) = W tq ([ Re ( e s , q t , e o ) , Im ( e s , q t , e o )]) , (11) where W tq R 2 d 2 d represents the parameter matrix.",
"[.] is the concatenation function, Re ( . ) denotes the real part of a complex vector and Im ( . ) is the imaginary part.",
"After getting the time embedding w.r.t. question t q , for timestamp prediction, the following score function to estimate the score for each timestamp t T q as follow: S t = Re ( t q , t ) (12) Entity Prediction .",
"1 A simple temporal question might contain the timestamp (e.g. 2001).",
"In this case, we set t q as the linear combination of this learned time embedding and the timestamp embedding from KG.",
"update the embedding of entity w.r.t. question by considering time embedding t q by an entity function FE as follow:",
"e q = FE ( e s , q r , t q ) = e s , q r , t q (13) Finally, we score the entity e E q by: S e = Re ( e q , e ) (14)",
"The answer entity of the question is either timestamp or entity.",
"Let S a be the answer score and thus S a = S t or S e when the answer is timestamp or entity.",
"Suppose C represents the number of candidate answers (i.e., C = | E q | + | T q | ), then we can define the probability of i -th candidate answer being true as: P a,i = exp( S a,i ) (cid:80) Cj =1 exp( S a,j )) .",
"Finally, we train the answer prediction model by minimizing the cross-entropy loss as follow: L answer = C (cid:88) i y i log( P a,i ) , (16) where y i = 1 if the i -th candidate is the true answer, otherwise y i = 0. 3.3.4 Temporal Contrastive Learning The temporal question answer system should be sensitive to the temporal relation implied in the question.",
"For example, the answer of What does happen before a given event? is quite different from that of What does happen after a given event?.",
"Existing works on TKG-QA usually resort to pre-trained language models for question understanding.",
"But these models are not sensitive to the difference of temporal expressions in free-text (Ning et al., 2020; Dhingra et al., 2021; Shang et al., 2021; Han et al., 2021), and thus prone to wrong predictions.",
"To make the system sensitive to the temporal relation implied in question, we resort to a contrastive learning method: we construct a contrastive question to the original question, then add auxiliary contrastive learning tasks to distinguish the latent temporal representation and prediction results coming from the pair of contrastive questions.",
"Contrastive Question Generation .",
"To generate the contrastive question Q for the given question Q , we first extract all the temporal words based on large number of questions in temporal question answer dataset, and then build a contrastive word pair dictionary by finding the antonyms.",
"The dictionary consists of D contr = {(first, last), (before, after), (before, during), (during, after), (before, when), (when, after)}.",
"Based on such dictionary, we replace the temporal word in given question Q by its antonym to generate its contrastive question Q .",
"Contrastive time order learning .",
"For the contrastive question pair Q and Q , we follow the same encoder in Eq.",
"11 to get the corresponding time-aware embeddings t q and t q c , respectively.",
"Meanwhile, according to the contrastive temporal word pair dictionary, suppose that we pickup the pair (word 1 , word 2 ) D contr for contrastive question construction, we can construct a question order label y o : y o = 0 if Q is achieved by replacing word 1 as word 2 , else y o = 1. Afterward, we distinguish the temporal orders implied by word 1 and word 2 by predicting of the order label y o based on t q and t q c as follow: p o = sigmoid (( t q t q c ) TW o ) (17) L order = y o log( p o ) (1 y o ) log(1 p o ) , (18) where W o R 2 d represents the parameter vector to be learned.",
"Answer-guided Contrastive Learning .",
"Let S = [ s 1 , , s C ] , S = [ s 1 , , s C ] be the answer scores w.r.t. questions Q and its contrastive question Q , respectively, where C = | E q | + | T q | .",
"By stacking these two scores together, we get S q = [ S ; S ] R 2 C .",
"Then, we can apply softmax over S q along the last dimension and get the probability scores P q = softmax ( S q ) R 2 C and sum ( P q [: , i ]) = 1 for i = 1 , , C .",
"Due to the fact that the answers of question Q are definitely not for question Q , we construct an answer-guided learning labels as y a = [ y 1 , , y C ] , where y i = 1 if and only if the i -th candidate is true answer for Q , otherwise y i = 0. Then, we get an answer-guided contrastive loss as follow: L contrast = 1 CC (cid:88) j =0 y i log( P q [0 , i ]) (19) Joint Training .",
"We combine the answer prediction loss and contrastive losses as the final objective function for joint training: Loss = L answer + o L order + c L contrast , (20) where o > 0 , c > 0 are the weight factors to make tradeoffs between different losses.",
"In this section, we conduct experiments to assess the effectiveness of our proposed method TSQA for TKG-QA.",
"Our experimental results show that our approach obtains significant improvements over the baseline models.",
"Data .",
"CRONQUESTIONS 2 is the largest known Temporal KGQA dataset consisting of two parts: a KG with temporal annotations, and a set of free-text questions requiring temporal reasoning.",
"This Temporal KG has 125k entities and 328k facts (quadru-ples), while a set of 410k questions is given.",
"The facts have the time spans in the edge.",
"These time spans or timestamps were discretized to years.",
"This dataset consists of questions that can be categorized into two groups based on their answer type: entity questions where the answer is an entity in the KG, and time questions where the answer is a timestamp.",
"The authors also categorize these questions into simple reasoning (including simple entity and simple time subtypes) and complex reasoning (including before/after , first/last and time join subtypes).",
"Table 1 provides the number of questions across different categories.",
"Complex questions require complex temporal reasoning which takes advantage of multiple facts and temporal order of these facts.",
"Evaluation Metrics include Hits@1 and Hits@10 , which is the standard evaluation metrics on CRONQUESTIONS (Saxena et al., 2021).",
"Hyper-parameter setting .",
"We train the TSQA models by setting the hyper-parameters as: learning rate = { 1 e 4 , 2 e 5 , 1 e 5 } , o = { 0.5, 1.0, 2.0, 3.0, 5.0 } and c = { 0.5, 1.0, 2.0, 3.0, 5.0 } , and pick up 2 https://github.com/apoorvumang/ CronKGQA the best hyper-parameters on dev set by the overall Hits@1 metrics.",
"Our models are implemented by PyTorch and trained using NVIDIA Tesla V100 GPUs.",
"EmbedKGQA (Saxena et al., 2020) is the first method to use KG embeddings for the multi-hop KGQA task.",
"It uses ComplEx (Trouillon et al., 2016) embeddings and can only deal with nontemporal KGs and single entity questions.",
"T-EaE-add/replacement (Saxena et al., 2021) are two modifications of KG enhanced language model EaE (Fvry et al., 2020), which integrates entity knowledge into a transformer-based language model and has been used for TKGQA (Saxena et al., 2020).",
"T-EaE-add has all grounded entities and time spans marked in the question, and T-EaE-replace replaces the BERT embeddings with the entity/time embeddings instead of adding them with token embeddings.",
"CronKGQA (Saxena et al., 2021) extends EmbedKGQA to the temporal QA task, and takes advantage of the temporal KG embeddings to answering temporal questions.",
"This is the current SOTA model on CRONQUESTIONS .",
"Table 2 compares different TKG-QA methods in terms of Hits@1 and Hits@10.",
"From this table, we observe that: 1) our proposed TSQA has achieved state-of-the-art performance in terms of all types of questions on both Hits@1 and Hits@10.",
"2) The performance improvement over the SOTA model is significant.",
"TSQA outperforms the SOTA results by more than 82% Hits@1 relative improvement (32% absolute error reduction) on complex questions and 21% Hits@10 relative improvement on simple questions.",
"These results proved the excellent performance of our proposed TSQA on question answering on the temporal knowledge graph, especially for complex temporal reasoning.",
"We also compare our method with baselines in terms of Hits@1 on different subtype questions in Table",
"3. From this table, we observe that: on complex questions, our proposed TSQA model outperforms all baseline models significantly.",
"The relative improvement is up to 75%, 94%, 56%, for before/after, first/last and Time Joint, respectively.",
"The first two kinds of questions are more 8023 Model Hits@1 Hits@10 Question Type Answer Type Question Type Answer Type Overall Complex Simple Entity Time Overall Complex Simple Entity Time EmbedKGQA 0.288 0.286 0.290 0.411 0.057 0.672 0.632 0.725 0.850 0.341 T-EaE-add 0.278 0.257 0.306 0.313 0.213 0.663 0.614 0.729 0.662 0.665 T-EaE-replace 0.288 0.257 0.329 0.318 0.231 0.678 0.623 0.753 0.668 0.698 CronKGQA 0.647 0.392 0.987 0.699 0.549 0.884 0.802 0.992 0.898 0.857 TSQA 0.831 0.713 0.987 0.829 0.836 0.980 0.968 0.997 0.981 0.978 Table 2: Comparison of different TKG-QA models on CRONQUESTIONS dataset.",
"challenging as they require a better understanding of the temporal expressions in question.",
"Our method is better in capturing such time-sensitivity change in temporal words and thus results in great improvement.",
"Moreover, for the simple questions, our method still keeps competitive performance compared to the SOTA model.",
"To understand the contributions of the proposed modules in our method, we perform an ablation study by sequentially removing the following components from our proposed TSQA: temporal Contrastive learning (TC), time-aware TKG embeddings (TKE), entity neighboring graph extractor (NG), and time estimation for question answer (TE) in Table",
"4. It is noted that removing TKE means that we replace TKE with T-CompLEx as KG encoder, and removing NG means that we perform QA over the whole knowledge graph.",
"By comparing the two adjacent rows of this table, we can infer the contributions of TC, TKE, NG and TE, respectively: 1) all these modules improve the overall performance in terms of Hits@1, especially for complex questions; 2) by comparing the last two adjacent rows, the proposed time estimation brings significant Hits@1 improvement (14.5%), since this module supplies the latent time embedding which not only enhances the interaction of timestamp estimation and answer estimation but also supplies a good anchor for finding the answer entity, which is very crucial for answering complex questions; 3) entity neighboring graph extraction gets 7.8% Hits@1 improvement over complex questions by comparing rows TC-TKE and TC-TKE-NG, since it significantly narrows down the search space of the candidate answers; 4) by comparing the first three rows, time-aware TKG embedding (TKE) and temporal contrastive learning (TC) further boost the Hits@1 over complex questions.",
"This is because the complex questions usually require the model to capture time ordering information implied in temporal words of the question.",
"And these two modules enhance temporal order learning by adding explicit time-order constraints.",
"In this paper, we propose a time-sensitive question answering framework (TSQA) over temporal knowledge graphs (KGs).",
"To facilitate the reasoning over temporal and relational facts over multiple facts, we propose a time estimation component to infer the unstated timestamp in the question.",
"To further improve the model's sensitivity to time relation words in the question and facilitate temporal reasoning, we enhance the model with a temporal KG encoder that produces KG embeddings that can recover the implicit temporal order and distance between different timestamps, and with contrastive losses that compare temporally exclusive questions.",
"With the help of answer search space pruning from entity neighboring sub-graphs, our TSQA model significantly improves the performance on complex temporal questions that require reasoning over multiple pieces of facts, and outperforms the previous state of the art by a large margin.",
"This work is supported by the Research and Development Grant No. 2020AAA0108600.",
"Zhen Jia, Abdalghani Abujabal, Rishiraj Saha Roy, Jan-nik Strtgen, and Gerhard Weikum.",
"2018a.",
"Tempquestions: A benchmark for temporal question answering.",
"In Companion Proceedings of the The Web Conference 2018 , pages 10571062.",
"Zhen Jia, Abdalghani Abujabal, Rishiraj Saha Roy, Jan-nik Strtgen, and Gerhard Weikum.",
"2018b.",
"Tequila: Temporal question answering over knowledge bases.",
"In Proceedings of the 27th ACM International Conference on Information and Knowledge Management , pages 18071810.",
"Zhen Jia, Soumajit Pramanik, Rishiraj Saha Roy, and Gerhard Weikum.",
"2021.",
"Complex temporal question answering on knowledge graphs.",
"In Proceedings of the 30th ACM International Conference on Information & Knowledge Management , pages 792802.",
"Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, and Zhifang Sui.",
"2016.",
"Towards time-aware knowledge graph completion.",
"In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers , pages 17151724.",
"Timothe Lacroix, Guillaume Obozinski, and Nicolas Usunier.",
"2020.",
"Tensor decompositions for temporal knowledge base completion.",
"arXiv preprint arXiv:2004.04926 .",
"Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, and Dan Roth.",
"2020.",
"Torque: A reading comprehension dataset of temporal ordering questions.",
"arXiv preprint arXiv:2005.00242 .",
"Apoorv Saxena, Soumen Chakrabarti, and Partha Talukdar.",
"2021.",
"Question answering over temporal knowledge graphs.",
"arXiv preprint arXiv:2106.01515 .",
"Apoorv Saxena, Aditay Tripathi, and Partha Talukdar.",
"2020.",
"Improving multi-hop question answering over knowledge graphs using knowledge base embeddings.",
"In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 4498 4507.",
"Chao Shang, Peng Qi, Guangtao Wang, Jing Huang, Youzheng Wu, and Bowen Zhou."
] | [
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"result",
"method",
"abstain",
"method",
"abstain",
"objective",
"objective",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Event Detection (ED) aims to identify event trigger words from a given text and classify it into an event type.",
"Most of current methods to ED rely heavily on training instances, and almost ignore the correlation of event types.",
"Hence, they tend to suffer from data scarcity and fail to handle new unseen event types.",
"To address these problems, we formulate ED as a process of event ontology population: linking event instances to pre-defined event types in event ontology, and propose a novel ED framework entitled OntoED with ontology embedding.",
"We enrich event ontology with linkages among event types, and further induce more event-event correlations.",
"Based on the event ontology, OntoED can leverage and propagate correlation knowledge, particularly from data-rich to data-poor event types.",
"Furthermore, OntoED can be applied to new unseen event types, by establishing linkages to existing ones.",
"Experiments indicate that OntoED is more predominant and robust than previous approaches to ED, especially in data-scarce scenarios.",
"Event Detection (ED) (Chen et al., 2015) is the task to extract structure information of events from unstructured texts.",
"For example, in the event mention Jack is married to the Iraqi microbiologist known as Dr. Germ. , an ED model should identify the event type as Marry ' where the word married ' triggers the event.",
"The extracted events with canonical structure facilitate various social applications, such as biomedical science (Li et al., 2019; Wang et al., 2020c), financial analysis (Deng et al., 2019; Liang et al., 2020), fake news detection (Wang et al., 2018; Nikiforos et al., 2020) and so on.",
"As a non-trivial task, ED suffers from the low-resource issues.",
"On the one hand, the maldistribuEqual Contribution.",
"tion of samples is quite serious in ED benchmark datasets, e.g. , FewEvent (Deng et al., 2020) and MAVEN (Wang et al., 2020b), where a large portion of event types contain relatively few training instances.",
"As shown in Figure 1, the sample size of two event types Attack and Riot differs greatly (4816 & 30).",
"In low-resource scenarios, supervised ED models (Chen et al., 2015; Nguyen et al., 2016; Liu et al., 2018) are prone to overfitting since they require sufficient training instances for all event types.",
"On the other hand, real-world applications tend to be open and evolve promptly, and accordingly there can be numerous new unseen event types.",
"Handling new event types may even entail starting over, without being able to re-use annotations from previous ones (Huang et al., 2018).",
"Regarding low-resource ED, Huang et al. (2018) take a fresh look at ED, by mapping each event mention to a specific type in a target event ontology, which can train from few seen event types and then transfer knowledge to new unseen ones.",
"However, the event ontology here merely considers the intra-structure for each event mention and event type.",
"In this paper, we enrich the event ontology with more inter-structures of event types, such as temporal, causal and hierarchical event-event relations (Ning et al., 2018; Wang et al., 2020a).",
"For example, as seen in Figure 1, Attack CAUSE Sentence , Sentence BEFORE Acquit , Attack COSUPER Riot .",
"Our key intention is to fully utilize the event ontology and leverage correlation knowledge from data-rich event types ( i.e. , Attack ) to data-poor ones ( i.e. , Sentence , Acquit and Riot ).",
"Besides, new event types ( i.e. , Be-Born ) can be learned with correlation ( i.e. , COSUPER ) of existing ones ( i.e. , Injure ).",
"As the first attempt to construct such event ontology, we propose a novel ED framework with ontology embedding called OntoED.",
"First, we establish the initial event ontology with event instances and types.",
"We capture semantic features and relations of event instances with BERT (Devlin et al., 2019) and utilize prototypes (Snell et al., 2017) to represent event types, where a prototype is the average of its instance embeddings.",
"Second, we extend the event ontology with event-event relations based on extracted relations among event instances, and then learn ontology embedding by aggregating neighbor prototypes for each prototype w.r.t. correlations among event types.",
"In this way, semantically similar event types in vector space will be closer, thus, improving the discrimination of dissimilar event types.",
"Third, we design an event correlation inference mechanism to induce new event correlations based on symbolic rules, e.g. , ( Sentence , BEFORE , Acquit ) ( Acquit , BEFORE , Pardon ) ( Sentence , BEFORE , Pardon ).",
"Thus, we can induce new event-event relations to further enrich the event ontology.",
"To the best of our knowledge, it is the first work to explicitly model correlations among event types with event ontology in low-resource ED.",
"Our contributions can be summarized as follows: We study the low-resource event detection problem and propose a novel ontology-based model, OntoED, that encodes intra and inter structures of events.",
"We provide a novel ED framework based on ontology embedding with event correlations, which interoperates symbolic rules with popular deep neural networks.",
"We build a new dataset OntoEvent for ED.",
"Extensive experimental results demonstrate that our model can achieve better performance on the overall, few-shot, and zero-shot setting.",
"Traditional approaches to ED are mostly based on neural networks (Chen et al., 2015; Nguyen et al., 2016; Liu et al., 2018; Wang et al., 2019; Yan et al., 2019; Cui et al., 2020; Shen et al., 2020; Lou et al., 2021), and ignore correlation knowledge of event types, especially in low-resource scenarios.",
"Most previous low-resource ED methods (Peng et al., 2016) have been based on supervised learning .",
"However, supervised-based methods are too dependent on data, and fail to be applied to new types without additional annotation efforts.",
"Another popular methods for low-resource ED are based on meta learning .",
"Deng et al. (2020); Lai et al. (2020); Shen et al. (2021) reformulate ED as a few-shot learning problem to extend ED with limited labeled samples to new event types, and propose to resolve few-shot ED with meta learning.",
"Besides, knowledge enhancement and transfer learning are applied to tackle low-resource ED problems.",
"Tong et al. (2020) leverage open-domain trigger knowledge to address long-tail issues in ED.",
"Liu et al. (2020); Du and Cardie (2020) propose to handle few-shot and zero-shot ED tasks by casting it as a machine reading comprehension problem.",
"Huang et al. (2018) propose to tackle zero-shot ED problem by mapping each event mention to a specific type in a target event ontology.",
"Note that Huang et al. (2018) establish the event ontology merely with intra-structure of events, while we extend it with inter-structure of event correlations.",
"Though these methods are suitable for low-resource scenarios, they mostly ignore implicit correlation among event types and lack reasoning ability.",
"In order to utilize correlation knowledge among event types, Li et al. (2020) propose a new event graph schema, where two event types are connected through multiple paths involving entities.",
"However, it requires various annotations of entities and entity-entity relations, which is complicated and demanding.",
"Different from Li et al. (2020), we propose to revisit the ED task as an ontology learning process, inspired by relation extraction (RE) tasks based on ontology and logic-based learning.",
"Lima et al. (2018, 2019) present a logic-based relational learning approach to RE that uses inductive logic programming for generating information extraction (IE) models in the form of symbolic rules, demonstrating that ontology-based IE approaches are advantageous in capturing correlation among classes, and succeed in symbolic reasoning.",
"We revisit the event detection task as an iterative process of event ontology population.",
"Given an event ontology O with an event type set E = { e i | i [1 , N e ] } , and corpus T = { X i | i [1 , K ] } that contains K instances, the goal of event ontology population is to establish proper linkages between event types and instances.",
"Specifically, each instance X i in T is denoted as a token sequence X i = { x ji | j [1 , L ] } with maximum L tokens, where the event trigger x ti are annotated.",
"We expect to predict the index t ( 1 t L ) and the event label e i for each instance respectively.",
"Besides, we utilize a multi-faceted event-event relation set R = RH (cid:116) RT (cid:116) RC for event ontology population and learning.",
"Thereinto, RH = { SUBSUPER , SUPERSUB , COSUPER 1 } denotes a set of relation labels defined in the subevent relation extraction task (Wang et al., 2020a; Yao et al., 2020).",
"RT = { BEFORE , AFTER , EQUAL 2 } denotes a set of temporal relations (Han et al., 2020).",
"RC = { CAUSE , CAUSEDBY } denotes a set of causal relations (Ning et al., 2018).",
"In this paper, we propose a general framework called OntoED with three modules: (1) Event Detection (Ontology Population), (2) Event Ontology Learning, and (3) Event Correlation Inference.",
"Figure 2 shows the key idea of the three modules.",
"Event Detection aims at identifying the event trigger x ti and type e i for each input tokens X i , and then identify relations among event instances.",
"The average instance embedding of each type is calculated as the primitive event prototype.",
"Event Ontology Learning aims to obtain event ontology embedding with the correlation of event prototypes, based on the relations among event types derived from instances.",
"Event Correlation Inference seeks to infer new event correlations based on existing event-event relations, so as to obtain a solid event ontology.",
"The detailed architecture of OntoED with running examples is illustrated in Figure",
"3. 3.3 Event Detection (Ontology Population) The input of ED is an initial event ontology with event types E and coarse corpus T .",
"Instance Encoder .",
"Given a token sequence X i = { x 1 i , , x Li } with trigger x ti , we use a pre-trained BERT (Devlin et al., 2019) to get a contextual representation X ti for x ti , and use the token embedding of [CLS] as the contextual representation X i for X i .",
"Note that the instance encoder is pluggable, and can be replaced as other models followed by (Deng et al., 2020; Cui et al., 2020).",
"Class Encoder .",
"We then represent event types as prototypes (Snell et al., 2017), as it is proven to be robust for low-resource ED (Deng et al., 2020).",
"Initially, event types have no correlation with others, thus we require to compute the prototype P k for e k E by averaging its instance embeddings: P k = 1 N k (cid:88) N k i =1 X i (1) where N k is the instance number of e k .",
"Afterward, event prototypes will be induced from the module of event correlation inference, as shown in Figure",
"3. Event Detector .",
"Given embeddings of a token sequence, we treat each token as an event trigger candidate and then compute probability of the corresponding event type for event trigger candidate x ti , denoted by: P ( y = e k ) = exp ( (cid:107) X ti P k (cid:107) ) (cid:80) N e j =1 exp ( (cid:107) X ti P j (cid:107) ) (2) where (cid:107) (cid:107) denotes Euclidean distance, and N e = |E| denotes the number of event types.",
"Instance Relation Extractor .",
"For each event instance pair ( X i , X j ) , we adopt a comprehensive way to model embedding interactions (Zhou et al., 2020), denoted by X pij = [ X i , X j , X i (cid:12) X j , X i X j ] , where [ , ] denotes a vector concatenation, and (cid:12) is the element-wise Hadamard product.",
"We then calculate the probability P ( y = r k ) of relation r k R between ( X i , X j ) by softmax.",
"Generally, we adopt cross entropy as the loss function for instance relation extraction, denoted by: LRE = (cid:88) N r k =1 y log P ( y = r k ) (4) where y is the ground-truth for ( X i , X j ) , and N r = |R| denotes the number of event-event relations.",
"Ontology Completion .",
"We complete event ontology O with both intra and inter structure of events.",
"We normatively link event instances T to event types E , and establish correlations among event types based on linkages among event instances.",
"Instance-to-class Linking .",
"Given a sentence S i (formalized as a token sequence X i ) with a trigger x ti of an event instance, we link these information to its corresponding event type e i with normative triples: ( S i , triggerIs, x ti ) and ( S i , instanceOf, e i ).",
"Class-to-class Linking .",
"Given an event instance pair ( X i , X j ) with a relation r , we upgrade the instance correlation to corresponding event types, denoted by ( e i , r, e j ) .",
"Besides, we link each event subtype to its corresponding supertype 3 with a SUBSUPER relation (SUPERSUB in reverse), and we link each event subtype pair having the same supertype with a COSUPER relation.",
"Ontology Embedding .",
"We represent the event ontology considering both instances and correlations for each event type.",
"Specifically, given a triple (cid:96) = ( e h , r, e t ) O , we propagate the prototype P h of head event type e h to prototype P t of tail event type e t with a relation transformation matrix M r R d d .",
"We select a matrix to embed r as it shows great robustness to model relations in low-resource senarios (Zhang et al., 2019).",
"We then aggregate propagation from all head event types by P t = (cid:88) ( e ih ,r i ,e t ) O (cid:96) P ih M r i (6) where O (cid:96) is all one-hop neighbor triples of e t in O .",
"The prototype P t of e t in (cid:96) after propagation is a weighted average of P t and P t with weight [0 , 1] , denoted by: P t = P t + (1 ) P t (7) 3 The supertypes and its corresponding subtypes in this paper are pre-defined and will be introduced in appendix.",
"We calculate the possibility that r is the relation between e h and e t with a truth value for ( e h , r, e t ) : ( e h , r, e t ) = sim ( P h M r , P t ) = ( P (cid:62) h M r P t ) , where is sigmoid function, and the similarity between P h M r and P t is evaluated via dot product.",
"Overall, the loss fuction for event ontology learning is defined by: LOL = (cid:88) ( e h ,r,e t ) O y log ( e h , r, e t ) (8) and y denotes the ground-truth label for ( e h , r, e t ) .",
"Given the event ontology with correlations among event types, we infer new event correlations based on existing ones.",
"To be specific, we utilize the grounding g to infer new event correlation triples, which can be generalized as the following form: ( e Ih , r I , e It ) ( e 1 h , r 1 , e 1 t ) , , ( e nh , r n , e nt ) (9) where the right side event triples ( e kh , r k , e kt ) O with k [1 , n ] have already existed in O and ( e Ih , r I , e It ) / O is new inferred triples to be added.",
"To compute the truth value of the grounding g , we select three object properties ( OP ) of relations defined in OWL2 4 Web Ontology Language: subOP , inverseOP , and transitiveOP , and then learn matrics of relations from linear map assumption (Zhang et al., 2019), presented in Table",
"1. Wang et al. (2020a); Ning et al. (2018) have defined some conjunctive constraints of relations between the event pair, we translate them into object property axioms, shown in Table",
"2. Object Property Axioms Instances of Relation / Relation Pair subOP ( r 1 ,r 2 ) (CAUSE , BEFORE ) inverseOP ( r 1 ,r 2 ) (SUBSUPER , SUPERSUB ), (BEFORE , AFTER ), (CAUSE , CAUSE dBy) transitiveOP ( r ) SUBSUPER , SUPERSUB , COSUPER , BEFORE , AFTER , EQUAL Table 2: Groundings of three object properties in O .",
"Assuming that M r and M r denotes the relation set on left and right of Eq (9) respectively, they are 4 https://www.w3.org/TR/owl2-profiles/ matrices either from a single matrix or a product of two matrices.",
"As relation constraints are derived from ideal linear map assumption (the 3rd column in Table 1), M r and M r are usually unequal but similar during training.",
"Thus, the normalized truth value F p of g can be calculated based on relation constraints (the 4th column in Table 1): F (cid:48) p = (cid:107) M r M r (cid:107) F , F p = F maxp F (cid:48) p F maxp F minp where (cid:107) (cid:107) F denotes Frobenius norm, and subscript p respectively denotes one of the three object properties.",
"F maxp and F minp is a the maximum and minimum Frobenius norm score.",
"F p [0 , 1] is the truth value for the grounding g and the higher F p means the more confident that g is valid.",
"LER = S (cid:88) i G ( S ) log F ip V (cid:88) j G ( V ) log F jp T (cid:88) k G ( T ) log F kp",
"G ( ) denotes all groundings w.r.t. subOP ( S ), inverseOP ( V ), and transitiveOP ( T ).",
"S , V , and T are hyperparameters for the loss of three object properties respectively.",
"L = LOP + LOL + LER (11)",
"The experiments seek to: (1) demonstrate that OntoED with ontology embedding can benefit both standard and low-resource ED, and (2) assess the effectiveness of different modules in OntoED and provide error analysis.",
"To this end, we verify the effectiveness of OntoED in three types of evaluation: (1) Overall Evaluation , (2) Few-shot Evaluation , and (3) Zero-shot Evaluation .",
"As none of present datasets for ED is annotated with relations among events, we propose a new ED dataset namely OntoEvent with event correlations.",
"It contains 13 supertypes with 100 subtypes, derived from 4,115 documents with 60,546 event instances.",
"The details of OntoEvent are introduced in appendix.",
"We show the main statistics of OntoEvent and compare them with some existing widely-used ED datasets in Table",
"3. Dataset #Doc #Ins #SuperT #SubT #E-E Rel ACE 2005 599 4,090 8 33 None TAC KBP 2017 167 4,839 8 18 None FewEvent -70,852 19 100 None MAVEN 4,480 111,611 21 168 None OntoEvent 4,115 60,546 13 100 3,804 Table 3: Statistics of OntoEvent compared with existing widely-used ED datasets.",
"OntoEvent is established based on two newly proposed datasets for ED: MAVEN (Wang et al., 2020b) and FewEvent (Deng et al., 2020).",
"They are constructed from Wikipedia documents or based on existing event datasets, such as ACE-2005 5 and TAC-KBP-2017 6 .",
"In terms of event-event relation annotation in OntoEvent , we jointly use two models: TCR (Ning et al., 2018) is applied to extract temporal and causal relations, and JCL (Wang et al., 2020a) is used for extract hierarchical relations.",
"The code of OntoED and OntoEvent dataset can be obtained from Github 7 .",
"For overall evaluation , we adopt CNN-based model DMCNN (Chen et al., 2015), RNN-based model JRNN (Nguyen et al., 2016), and GCN-based model JMEE (Liu et al., 2018).",
"Besides, we adopt BERT-based model AD-DMBERT (Wang et al., 2019) with adversarial imitation learning.",
"We also adopt graph-based models OneIE (Lin et al., 2020) and PathLM (Li et al., 2020) which generate graphs from event instances for ED.",
"For few-shot evaluation and zero-shot evaluation , we adopt some metric-based models for few-shot ED, such as MatchNet (Lai et al., 2020), ProtoNet (Snell et al., 2017) and DMBPN (Deng et al., 2020).",
"We also adopt knowledge-enhanced model EKD (Tong et al., 2020) and BERT-based models QAEE (Du and Cardie, 2020) as well as RCEE (Liu et al., 5 http://projects.ldc.upenn.edu/ace/ 6 https://tac.nist.gov/2017/KBP/Event/index.html 7 https://github.com/231sm/Reasoning In EE 2020) based on machine reading comprehension.",
"With regard to settings of the training process, SGD (Ketkar, 2014) optimizer is used, with 30,000 iterations of training and 2,000 iterations of testing.",
"The dimension of token embedding is 50, and the maximum length of a token sequence is 128.",
"In OntoED, a dropout rate of 0.2 is used to avoid over-fitting, and the learning rate is 1 10 3 .",
"The hyperparameters of , , , and are set to 0.5, 0.5, 1.5 and 1 respectively.",
"S , V , and T are set to 0.5, 0.5 and 1 respectively.",
"As the dataset is unbalanced, we evaluate the performance of ED with macro precision (P), Recall (R) and adopt micro F1 Score (F) following (Chen et al., 2015).",
"Detailed performance can be found in Github 7 .",
"Setting .",
"We follow the similar evaluation protocol of standard ED models, e.g. , DMCNN (Chen et al., 2015).",
"Event instances are split into training, validating, and testing subset with ratio of 0.8, 0.1 and 0.1 respectively.",
"Note that there are no new event types in testing set which are not seen in training.",
"As seen from Table 4, OntoED achieves larger gains compared to conventional baselines, e.g. , DMCNN, JRNN and JMEE.",
"Moreover, OntoED still generally excel BERT-based AD-DMBERT.",
"This implies the effectiveness of ED framework with ontology embedding, which can leverage and propagate correlations among event types, so that reduce the dependence on data to some extent.",
"Especially, OntoED also outperform graph-based models, i.e. , OneIE and PathLM.",
"The possible reason is that although they both convert sentences into instance graphs, and PathLM even connects event types with multiple entities, the event correlations are still implicit and hard to capture.",
"OntoED can explicitly utilize event correlations and directly propagate information among event types.",
"Setting .",
"We follow the similar evaluation protocol and metrics of data-scarce ED models, i.e. , RCEE (Liu et al., 2020), which train models with partial data.",
"We randomly sample nearly 80% event types for training, 10% for validating, and 10% for testing.",
"Differently from overall evaluation, the event types in testing set are not exsiting in training set.",
"As seen from Table 5, we demonstrate F1 score results in extremely low-resource scenarios (train-ing with less than 20% data, with the similar setting to Liu et al. (2020)).",
"Obviously, OntoED behaves tremendous advantages in low-resource ED.",
"For example, OntoED obtains 44.98% F1 with 1% data, in comparison to 7.09% in MatchNet and 8.18% in ProtoNet.",
"We also illustrate accuracy results with different ratios of training data followed by Liu et al. (2020), show in Figure",
"4. As seen, OntoED demonstrates superior performance with less data dependence than baselines.",
"Especially comparing with DMBPN and EKD, which require 60% training data to closely achieve the best results, while OntoED only uses 20%.",
"Besides, we find that the performance on DMBPN increases first and then slightly decreases as the ratio of training data increases, the possible reason may lie in data noise and redundancy.",
"In low-resource scenarios, more data are not always better.",
"Particularly for some merely data-driven ED models, such as DMBPN, may obtain a worse effect instead if added data are dirty or duplicated.",
"But for OntoED, as it utilizes correlation knowledge in the event ontology and has less dependence on event instances, making it more robust to noisy and redundant data.",
"Furthermore, OntoED also outperforms than BERT-based model with regarding each event instance as a question, i.e. , QAEE and RCEE.",
"This implies that event ontology learning with event type knowledge may resolve low-resource ED more advantageously than training merely with event instances.",
"Setting .",
"We follow the similar evaluation protocol and metrics of zero-shot ED models, i.e. , ZSEE (Huang et al., 2018), and comply with the same dataset segmentation policy as few-shot evaluation, thus there are also new unseen event types for testing.",
"Differently, ED data are completely banned for training, meaning that we train models only with event types other than instances.",
"Table 6 demonstrates the results regarding zero-shot ED.",
"We can see that OntoED achieves best precision and F1 score as well as comparable recall results in comparison to baselines.",
"This illustrates the effectiveness of OntoED handling new unseen event types without introducing outsourcing data.",
"Traditional models, such as EKD and RCEE, require to adopt other datasets, e.g. , WordNet (Miller et al., 1990) (where words are grouped and interlinked with semantic relations) and FrameNet (Baker, 2014) (where frames are treated as meta event types) to increase the persuasiveness of results.",
"In contrast, OntoED naturally models the structure of event types with an event ontology, thus even for a new unseen event type without instance data, we can also obtain its representation through the event-event correlation.",
"Moreover, OntoED is also beneficial to resolve zero-shot ED than ZSEE.",
"This may due to OntoED modeling with both intra and inter structures of events while ZSEE merely considering the intra-structure.",
"To assess the effect of event ontology learning and correlation inference, we remove the two modules in OntoED, and evaluate F1 score shown in Figure",
"5. From the results, we observe that OntoED outperforms the two baselines in all evaluation settings, indicating that event ontology learning and correlation inference facilitate ED, as they utilize knowledge among event types and has less dependence on instance data.",
"Furthermore, in terms of performance degradation compared to OntoED, F1 score of OntoED merely without event correlation inference ( e.g. , 10.9% ) drops more seriously than that without event ontology learning ( e.g. , 6.6% ), and the phenomenon is more obvious in few-shot and zero-shot evaluation ( e.g. , 10.9% v.s. 15.9% and 28.1% ).",
"This illustrates that event correlation inference is more necessary in OntoED, as it establishes more correlations among event types, thereby knowledge can be propagated more adequately, especially from data-rich to data-poor events.",
"We further conduct error analysis and provide some representative examples.",
"(1) One typical error relates to similar event-event structures in the event ontology.",
"As OntoED considers event correlations, 2veraOO EvaOuation FeZ-6hot EvaOuation Zero-6hot EvaOuation DLIIerent settLngs Ior evDOuDtLon 0 10 20 30 40 50 60 70 ) 1 SF o r e ( % ) I o r ( D Onto(D Onto(D w/o &orreODtLon InIerenFe Onto(D w/o &orreODtLon InIerenFe & OntoOogy /eDrnLng 10.9% 6.6% 15.9% 28.1% 8.1% 13.9% Figure 5: Effect of different modules in OntoED.",
"event types with similar neighbor triples can be indistinguishable.",
"For example, Robbery and Kidnapping have the same supertype Crime , and they both have the neighbor triples of ( , CAUSE , Arrest ).",
"(2) The second error relates to wrong instance relations.",
"As the instance relation extraction directly influence the establishment of event correlations, wrong instance relations will cause error propagation.",
"(3) The third error relates to the same event mention for different event types.",
"For example, Of the 126 people aboard, 47 died and 74 sustained serious injuries. ' both mentions Die and Injure .",
"This paper proposes a novel event detection framework with ontology embedding called OntoED .",
"We revisit the ED task by linking each event instance to a specific type in a target event ontology.",
"To facilitate the linkage, we enrich the event ontology with event-event relations, such as temporal, causal and hierarchical correlation, and induce more event correlations based on existing ones.",
"The key insight is that event ontology can help to reduce model dependence on instance data, especially in low-resource scenarios.",
"As data-rich event types can propagate correlation knowledge to data-poor ones, and new event types can establish linkages to the event ontology.",
"We demonstrate the effectiveness of OntoED in three settings: overall, few-shot as well as zero-shot, and experiments show that OntoED excels previous methods with great robustness.",
"In the future, we intend to extend our work in several aspects.",
"First, we would improve the event ontology and consider more event correlations.",
"Second, we would explore if low-resource ED can also boost to identify event correlation.",
"Third, we would develop more neuro-symbolic methods for ED.",
"We want to express gratitude to the anonymous reviewers for their hard work and kind comments.",
"This work is funded by National Key R&D Program of China (Funding No.2018YFB1402800) and NSFC91846204.",
"A broad goal of event detection is to extract structured knowledge from unstructured texts to facilitate knowledge acquisition.",
"For example, it is valuable in the medical domain and provides social benefits to analyze dispensatory details as well as electronic health records.",
"Furthermore, a solid ED system can also be applied to many society issues, such as anti-terrorist and public opinion analysis.",
"In this paper, we present a new dataset OntoEvent for ED with event-event correlations.",
"The event data are all collected from existing datasets ( i.e. , ACE 2005) or open source databases ( e.g. , Wikipedia), and the annotation are generated from existing models with citations.",
"In experiments, we detailedly describe how to evaluate the newly-proposed OntoEvent and provide specific analysis.",
"The code and dataset are both available.",
"Our approach to ED can leverage only a few event corpus to establish the linkage between event types and event instances w.r.t. event correlations.",
"In addition, this work is also a brand-new attempt to combine information extraction and symbolic reasoning, based on ontology embedding.",
"Our intention is to develop an ontology-based ED system for the NLP community, and wish our innovation can become a small step in this direction."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"AI systems embodied in the physical world face a fundamental challenge of partial observability; operating with only a limited view and knowledge of the environment.",
"This creates challenges when AI systems try to reason about language and its relationship with the environment: objects referred to through language (e.g. giving many instructions) are not immediately visible .",
"Actions by the AI system may be required to bring these objects in view.",
"A good benchmark to study this challenge is Dynamic Referring Expression Recognition ( dRER ) task where the goal is to find a target location by dynamically adjusting the field of view (FoV) in a partially observed 360 scenes.",
"In this paper, we introduce HOLM, H allucinating O bjects with L anguage M odels, to address the challenge of partial observability.",
"HOLM uses large pre-trained language models (LMs) to infer object hallucinations for the unobserved part of the environment.",
"Our core intuition is that if a pair of objects coappear in an environment frequently, our usage of language should reflect this fact about the world.",
"Based on this intuition, we prompt language models to extract knowledge about object affinities which gives us a proxy for spatial relationships of objects.",
"Our experiments show that HOLM performs better than the state-of-the-art approaches on two datasets for dRER ; allowing to study generalization for both indoor and outdoor settings.",
"One of the fundamental challenges in building AI systems physically present in the world is addressing the issue of partial observability, the phenomenon where the entire state of the environment is not known or available to the system.",
"People cope with partial observability by reasoning about what is not immediately visible (see example in Figure 1).",
"People combine their general knowledge about the world and adapt their knowledge Figure 1: Illustration of our main contribution: Hallucinating Objects .",
"to specific contexts (Torralba et al., 2006).",
"General knowledge about kitchens can help to know approximately where to look for pans or utensils in a kitchen that has never been seen before.",
"How can an AI system build general knowledge about objects and their environment to help with a similar task?",
"Even more interestingly, can we gather this information from language, using readily available resources such as language models trained on a large collection of unlabeled text?",
"In this paper, we introduce a method called HOLM, H allucinating O bjects with L anguage M odels, for reasoning about the unobserved parts of the environment.",
"Inspired by the recent successes of large pre-trained language models (LM) extracting knowledge about the real world, we propose a methodology based on spatial prompts to extract knowledge from language models about object.",
"HOLM extracts spatial knowledge about objects in the form of affinity scores, i.e., how often a pair of objects are observed together.",
"This knowledge of objects are combined with observed spatial 5440 Figure 2: Illustration of the dRER task with an example of language instruction and its recognition in four steps.",
"The agent adjusts its FoV by looking at different directions and navigate on the graph in the spherical view.",
"Note that objects mentioned in bold in the instruction are not visible at all until timestep 4.",
"Thus, the agent needs to reason about possible locations of the mentioned object using its partial view of the scene.",
"layout to hallucinate what might appear in the unobserved part of the scene.",
"We evaluate our HOLM approach on Dynamic Referring Expression Recognition ( dRER ) task where the goal is to find a target location by dynamically adjusting the field of view (FoV) in partially observed 360 scenes.",
"We examine how HOLM compares with the state-of-the-art approaches on two publicly available datasets to study generalization for both indoor and outdoor settings.",
"dRER task is designed to localize a target location in a dynamically observed 360 scene given natural language instruction.",
"Unlike conventional referring expression recognition, which refers to an object in a static visual input, in dRER , only a small part of the scene is visible in a field of view.",
"However, the system can adjust the field of view to find the described point in the scene.",
"In Figure 2, we illustrate the dRER task and motivate our method.",
"On top, natural language instruction is given.",
"In the middle, the spherical view of the scene is illustrated the agent explores only some portion of a 360 scene.",
"FoVs on the sphere represented as square nodes form a graph.",
"By navigating to a neighboring node , the agent adjusts its FoV and observes a different view of the scene.",
"Note that objects mentioned in the instruction oven and range hood are not visible until the fourth timestep.",
"However, we can reason about where to look using visible objects such as the air vent or the fridge.",
"Thus, to perform well on this task, it is essential to reason about where objects might appear.",
"(cid:104)S , A , P s , r (cid:105) where S is the visual state space, A is the discrete action space 1 , P s is the unknown environment probability distribution from which the next state is drawn, and r R is the reward function.",
"For a time step t , the agent observes an image s t S , and performs and action a t A .",
"As a result of this action, the environment generates a new observation s t +1 P s ( | s t , a t ) as the next state.",
"This interaction continues sequentially and ends when the agent performs a special STOP action or a pre-defined maximum episode length is reached.",
"The resolution process is successful if the agent ends the episode at the target location.",
"In dRER , instructions are represented as N sequence of sentences represented as x = { x i } Ni =1 .",
"Each instruction sentence x i consists of a sequence of L i words, x i = [ x i, 1 , x i, 2 , ..., x i,L i , ] .",
"The training dataset DE = {X , T } consists of M pairs of the instruction sequence x X and its corresponding expert trajectory T .",
"The agent learns to navigate by learning a policy via maximum like-1 For computational efficiency, we picked discrete action space.",
"max L ( X , T ) , where L ( X , T ) = log ( T |X ) L ( X , T ) = 1 MM (cid:88) k =1 log ( k | x k ) (1)",
"In dRER , the system observes the current FoV and does not see the resulting FoV before taking any actions.",
"Thus, it is essential to reason what might appear in a future observation using what is currently visible to the system.",
"Our core intuition is that objects visible in the current FoV and their locations in the FoV give us a clue about what might appear if a particular action is taken.",
"Here, we propose an approach for reasoning about future observations using what is visible and some background knowledge of objects.",
"Let us go through the illustration in Figure 3 to explain our HOLM method.",
"In the top panel, we feed spatial prompts to pre-trained language models to extract knowledge about objects in the form of affinity scores.",
"In the bottom panel, we see the input of the system where there are natural language instructions, an FoV of the scene, and detected objects.",
"Next, we calculate which objects are relevant to each action.",
"For instance, couch detections are on the right side; 5442 thus, they are relevant to the right action.",
"Similarly, the fridge is relevant for the left action because it is on the left side.",
"Then on the third step, using the affinity score of a pair of objects, we predict what might appear after performing an action.",
"For right action, our model hallucinates a tv and tv-stand might appear because the couch and tv have a high affinity score according to the LM.",
"Language models process a large amount of text to learn regularities in natural language.",
"They do so by predicting the next word or masked token given a sequence of words.",
"Our intuition is that objects that frequently appear in an environment close to each other will have similar language usage.",
"Thus, we hypothesize that language models' capability of learning affinity scores of words in language also reflects objects' spatial properties.",
"In Figure 3's top panel, we illustrate how we extract this capability.",
"We query language models trained on a large amount of free-form text with spatial relationship prompts.",
"These spatial prompts aim to capture the usage of words when they appear together in the world.",
"An example of these prompt templates is Near the o 1 , there is ___ where o 1 O is an object label where O is a set of object labels.",
"If object o 1 co-occurs with o 2 with high frequency, the language model would provide a high probability for the phrase Near the o 1 , there is o 2 .",
"Using all pairs in O and K 2 spatial templates, we generate queries q .",
"We then calculate affinity scores C o 1 ,o 2 , i.e., observing o 2 when o 1 is present as follows: C o 1 ,o 2 = K (cid:88) i =1 p LM ( o 2 | q i ) (2) Where p LM ( o 2 | q ) is a language model that calculates the probability of observing a token o 2 given a prefix sequence of tokens q .",
"Our main idea behind HOLM is to reason about what might be observed in a future observation by combining (1) which objects are visible in the current observation and (2) what we know about the spatial properties of those objects.",
"We explain the details of our approach in this section.",
"Let p a R | O | be the vector of probabilities of observing an object among a set of all objects O 2 Please see Appendix A.1 for the full list of spatial prompt templates.",
"Where p FoV R | O | is a vector of confidence values for objects detected in the current FoV.",
"We use an off-the-shelf object detection system (Anderson et al., 2018a) to calculate p FoV .",
"C is the affinity scores of size | O | | O | .",
"C represents how often a pair of object appear in a spatial relationship and represents the background knowledge of objects.",
"1 a { 0 , 1 } | O | is a binary vector representing spatially related objects for a direction a .",
"This vector is calculated with an indicator function to determine whether an object is spatially related to action a .",
"We calculate the indicator function as follows.",
"First, we separate the FoV into 4 imaginary regions called quadrants where each quadrant determines how a region in observed FoV is spatially relevant for canonical directions (i.e., up, down, left, right).",
"In other words, quadrants are hot-spots for each direction i.e., the left side of the image is more relevant to the right side of the image if we are interested in what might appear on the left.",
"For 8 directions (left, right, down, up, down-left, downright, up-left, up-right), we calculate how much each objects' bounding box overlaps with these quadrants.",
"If intersection-over-union is above a fixed threshold we keep this object for the hallucination process.",
"We designed our experiments to study and evaluate our proposed HOLM approach under five different research questions.",
"RQ1: What is the performance of HOLM when compared to other state-of-the-art approaches?",
"RQ2: what is the impact of LM as a source of knowledge for HOLM when compared to other more conventional sources (e.g., images)?",
"RQ3: How essential are external sources of data for learning knowledge about objects compared to in domain data?",
"RQ4: How accurate is HOLM for predicting objects in future observations?",
"RQ5: How do annotation-free language-based knowledge sources i.e., LMs and word embeddings compare for HOLM?",
"The following section explains the details of experimental setup.",
"Our results are presented and discussed in Section 4.2.",
"To study the research questions previously mentioned, we used two publicly available datasets and state-of-the-art methods as baselines to compare with.",
"Datasets.",
"We selected the following two datasets to see if our method generalizes to both indoor and outdoor settings.",
"The Refer360 dataset (Cirik et al., 2020) consists of 17K natural language instructions and ground-truth trajectory pairs for localizing a target point in 360 scenes.",
"The ground-truth trajectories are annotated by human annotators in the form of successive FoVs in partially observed 360 scenes.",
"The dataset uses a subset of the SUN360 dataset (Xiao et al., 2012) as the source of scenes and these scenes are from both indoor and two outdoor locations.",
"Touchdown (Chen et al., 2018) consists of 9K natural language instruction and ground-truth location pairs for 360 scenes on Google Streetview.",
"Unlike the Refer360 dataset, Touchdown does not have expert trajectories only expert predictions for the target location are provided.",
"Thus, we generated ground-truth trajectories by calculating shortest path trajectories between a randomly selected starting point 3 and the target location.",
"The Self Monitoring Navigation Agent (SMNA) (Ma et al., 2019) model is trained with a co-grounding module where both visual and textual input is attended at the same time.",
"The agent also measures its progress with a progress monitor module.",
"FAST (Ke et al., 2019) stands for Frontier Aware Search with backTracking.",
"The FAST model learns to score partial trajectories of an agent for efficiently backtracking to a previous location after a mistake.",
"Speaker-Follower (Fried et al., 2018) uses a sequence-to-sequence speaker model to rerank a follower model's candidate trajectories.",
"This pragmatic reasoning model has been shown to improve navigation agents' performance significantly.",
"3 Following (Cirik et al., 2020), we set the initial random point to be a fix heading and random yaw.",
"LingUNet (Misra et al., 2018) is an image-to-image encoder-decoder model for learning image-to-image mappings conditioned on language.",
"We should emphasize that, unlike the previous methods, LingUNet is not a navigation model; instead, it predicts regions over an image.",
"RANDOM agent randomly picks an action.",
"STOP agent predicts the starting FoV as the target FoV.",
"For a fair comparison, the same model was used as the basis for all the compared models.",
"For our proposed approach HOLM is used to enhance the SMNA baseline by hallucinating objects for unseen regions.",
"After getting object hallucinations for each neighboring FoVs, we use the sum of word embeddings for object labels as the input representation for the neighboring FoV.",
"In the oracle Next FoV scenario, we use ground-truth FoVs to do the same process.",
"For a fair comparison, we use SMNA as the base agent for learning to recover from a mistake during navigation process with FAST and as the follower model for pragmatic reasoning with Speaker-Follower.",
"Evaluation Metrics.",
"Our main evaluation metric for methods is FoV accuracy: the percentage of the time the target location is visible in the final FoV.",
"The FoV accuracy sets an upper bound on the localization accuracy for predicting the pixel location of the target point, i.e., if the target is not visible, it is impossible to predict the exact location.",
"Thus, we focus on this metric to compare systems.",
"Implementation.",
"All models are trained for 100K iterations.",
"We use Adam (Kingma and Ba, 2015) for optimization with a learning rate 0.0001 and weight decay parameter 0.0005 (Krogh and Hertz, 1992).",
"For each model, we perform a grid-search over their hyperparameters (e.g., number of hidden units, number of layers, dropout rate) and pick the best performing model based on validation score 4 .",
"All models are implemented using PyTorch (Paszke et al., 2019) and publicly available 5 .",
"To speed up the training procedure, we used fixed a grid of FoVs for all 360 images where each FoV is connected to its neighboring FoVs.",
"This grid forms the navigation graph depicted in 4 For Refer360 we use validation unseen split.",
"Touchdown does not have seen-unseen distinction.",
"5 https://github.com/volkancirik/HOLM 5444 Method Oracle Refer360 Touchdown Stop Agent 14.1 0.0 Random Agent 12.1 6.8 SMNA (Ma et al., 2019) 27.1 45.9 + HOLM (this work) 32.2 49.8 SMNA (Ma et al., 2019) Next FoV 33.5 50.2 LingUNet* (Chen et al., 2018) Full Panorama 21.4 47.2 Table 1: FoV accuracy results for Refer360 and Touchdown with no hallucination baseline, best performing models, and Next FoV oracle model, i.e. the ability to look ahead for neighbor FoVs, and observing full 360 scenes.",
"the Figure 2.",
"We use 30 of separation between successive FoVs which provides enough overlap to reveal relevant information about successive FoVs yet distant enough so that the model needs to reason about future steps.",
"We then pre-calculated the rectilinear projection of each of the FoVs on the grid for all scenes.",
"In this section we present and discuss experimental results and analyses.",
"(RQ1)",
"HOLM Improves performance.",
"Our main results are presented in Table 1.",
"In the first row block, we see that simple non-learning baselines fail to perform on the dRER .",
"In the second row block, we compare our method with the baseline where the agent does not have any visual input from the next FoVs.",
"HOLM improves the baseline by hallucinating objects for the next FoVs.",
"In the third row block, we provide results for oracle scenarios.",
"For SMNA, we feed ground-truth FoV as the input of the system.",
"This result sets the upper bound on HOLM, because it cannot achieve better hallucination than the ground-truth FoVs.",
"However, HOLM achieves pretty close to this upper bound and show that it can provide useful predictions for this task.",
"For LingUNet, we feed the full 360 scenes as the visual input.",
"Since LingUNet is not a navigation agent i.e. predicts the target location using full 360 scenes, we calculate FoV accuracy by drawing an FoV around the prediction, which explains *'.",
"In Table 2, we compare HOLM with FAST and Speaker-Follower methods, both of which use beam search.",
"During the beam search, these methods use multiple trajectories while deciding on a trajectory.",
"However, this is not plausible in a real-world scenario, i.e. a robot would not gen-Method Beam Search Refer360 Touchdown Baseline SMNA (Ma et al., 2019) 27.1 45.9 + HOLM (this work) +5.1 +3.9 + FAST (Ke et al., 2019) (cid:33) -6.4 +4.7 + Speaker-Follower (Fried et al., 2018) (cid:33) -4.6 -11.1 Table 2: FoV accuracy results for Refer360 and Touchdown for methods using beam search or single candidate trajectory.",
"erate many trajectories before performing action.",
"HOLM, on the other hand completes the task on a single trajectory while predicting possible future states.",
"FAST improves SMNA for Touchdown but not for Refer360 , which might be due to the richness of scenes in Refer360 whereas in Touchdown , the scenes are always in the same domain.",
"Speaker-Model's decreases the score for SMNA possibly due to the Speaker models' poor performance where the BLEU score is around 6.",
"HOLM consistently improves for both datasets and does not perform any expensive look-ahead operations such as beam search.",
"affinity scores compared to other sources.",
"In Table 3, we compare several baseline methods for calculating the affinity scores.",
"First, we use uniform (i.e., each object pair has the same affinity score) and identity (i.e., object x can only have affinity score with itself) baselines.",
"We also study calculating affinity scores using data annotated by humans.",
"First, we use object annotations in VisualGenome (Krishna et al., 2017).",
"VisualGenome provides a large collection of fine-grained annotations for objects and their spatial relationships.",
"Second, ideally we would like to use human annotations for calculating the affinity score.",
"However, this requires annotation of | O | 2 annotations.",
"Instead, as a proxy, we use WordNet (Miller, 1995), a knowledge-base hierarchy annotated by experts.",
"We use NLTK (Bird et al., 2009) to calculate the WordNet similarity to extract the affinity scores between ob-5445 jects.",
"XLM-based HOLM achieves the best results among these baselines.",
"This result shows that without using human annotations, we can extract useful knowledge about objects using pre-trained LMs.",
"information compared to task data.",
"In Table 4, we compare methods that only use task data for object hallucination and HOLM with external sources such as pre-trained LM.",
"For the second row in the table), we use the BUTD model (Anderson et al., 2018a) to annotate training images with object bounding boxes.",
"Using bounding boxes of objects, we calculate affinity scores.",
"For the third row in the table, we design a model that takes FoV and an object type as an input and predicts a direction (i.e., hallucinate where it might appear) as output.",
"We pass the final feature map layer of 152-layer ResNet (He et al., 2016) as input to a 3-layer feed-forward neural network to predict objects that might appear in neighboring FoVs.",
"This model achieves an F1 score of 40.3 for direction prediction.",
"Both of these methods improve over the SMNA baseline but are worse than the pretrained LM.",
"This result indicates that task data may have limitations, and external sources such as a pretrained LM may provide a signal for knowledge about objects.",
"(RQ4)",
"Accuracy of HOLM translates to dRER So far, we measure the performance of HOLM for the downstream dRER task.",
"We can also measure how accurate HOLM is at predicting the presence of an object in neighboring FoVs.",
"We annotate each neighboring ground-truth FoVs with detections from BUTD.",
"If the p ia for object o i O is above 1 | O | , we count that as a prediction of an object in the neighboring FoV after performing action a .",
"In Table 5, we provide precision, recall, and F1 score for the performance of different methods for calculating affinity scores for HOLM.",
"XLM achieves the best performance among the methods we compare.",
"We conclude that the performance for the intrinsic task (i.e., predicting the presence of objects) translates to dRER performance.",
"good sources of general knowledge of objects In Table 6, we compare word embedding methods and different language models.",
"We use cosine similarities between pairs of objects to calculate the affinity scores.",
"For language models, we compare Open AI's GPT3 (Brown et al., 2020) using their online API 6 .",
"We use Transformers Library (Wolf et al., 2020) for RoBERTa (Liu et al., 2019c) and XLM (Conneau and Lample, 2019).",
"All methods consistently improve over the baseline SMNA model, however, we achieve the best performance using XLM.",
"This result indicates that we can extract useful knowledge about objects with methods relying on large amount of unlabeled text.",
"Our work on dRER is closely related to previous studies focusing on Referring Expression Recognition (RER), Vision-and-Language Navigation (VLN), and methods we propose are related to pretraining language models for vision-and-language tasks, model-based reinforcement learning, and co-occcurrence modeling for computer vision.",
"We review these studies in this section.",
"RER is the task of localizing a target object or a point in an image described by a natural language expression.",
"The most of existing datasets 6 https://beta.openai.com/ 5446 poses the task in 2D images with objects as being the target (Kazemzadeh et al., 2014; Yu et al., 2016; Mao et al., 2016; Strub et al., 2017; Liu et al., 2019a; Akula et al., 2020; Chen et al., 2020).",
"Several lines of work are proposed to address RER (Mao et al., 2016; Nagaraja et al., 2016; Yu et al., 2016; Hu et al., 2016; Fukui et al., 2016; Luo and Shakhnarovich, 2017; Liu et al., 2017; Yu et al., 2017; Zhang et al., 2018; Zhuang et al., 2018; Deng et al., 2018; Yu et al., 2018; Cirik et al., 2018; Liu et al., 2019b).",
"In Touchdown (Chen et al., 2018) and Refer360 (Cirik et al., 2020) the target is a point not an object in a 360 image.",
"In the dRER setup, we also use 360 images of Touchdown and Refer360 , but we do not provide the full panoramic view of the scene.",
"Instead, in a more realistic scenario, the agent observes a partial and dynamic view of the scene, i.e. the agent needs to adjust its FoV to find the target location.",
"Closer to our work, in REVERIE (Qi et al., 2020b) an embodied setup is proposed where the agent needs to first navigate to a location where the target object is visible.",
"Similar to Touchdown and Refer360 , at the final position, the full 360 view is visible to the agent.",
"Unlike ours and similar to 2D image-based RER, the target is an object rather than a point in the scene.",
"VLN is a vision-and-language task where an agent in a simulated environment observes a visual input and is given a natural language instruction to navigate to a target location.",
"The earlier work (MacMahon et al., 2006; Shimizu and Haas, 2009; Chen and Mooney, 2011) studies the task with synthetic images or in a very small scale (Vogel and Jurafsky, 2010).",
"Anderson et al. (2018b) proposes Room-to-room (R2R) benchmark and revisit VLN task with a modern look.",
"In R2R, the agent observes panoramic scans of a house (Chang et al., 2017) and needs to carry out the natural language instruction.",
"EnvDrop (Tan et al., 2019) model shows generalization to unseen environments by dropping visual features.",
"PREVALENT (Hao et al., 2020) tackles the data sparsity problem with a pretraining scheme.",
"Hong et al. (2021) show that a pre-trained multi-modal can be enhanced with a memory state for the VLN task by recurrently feeding a contextualized state feature after each time step.",
"dRER also poses a navigation task where locations in physical space in VLN correspond to FoVs in a fixed location.",
"In dRER , a trajectory of the agent corresponds to its resolution process for finding the goal location.",
"Pre-trained models for Vision-and-Language has been recently studied after the huge success of transformer-based models (Vaswani et al., 2017) in NLP (Devlin et al., 2018; Liu et al., 2019c; Conneau and Lample, 2019; Sun et al., 2019b; Poerner et al., 2020; Raffel et al., 2020; Brown et al., 2020).",
"Numerous studies extend these approaches to the multimodal domain (Tan and Bansal, 2019; Lu et al., 2019; Sun et al., 2019a; Su et al., 2020; Li et al., 2020; Qi et al., 2020a; Hu and Singh, 2021).",
"They achieve the-state-of-the-art results in several tasks such as image captioning, text-to-image retrieval, or referring expression recognition.",
"Our work differs from these studies in the sense that the previous approaches use large scaled paired image-text data (Chen et al., 2013; Divvala et al., 2014; Sadeghi et al., 2015; Radford et al., 2021; Jia et al., 2021) to learn efficient representations (Frome et al., 2013; Kottur et al., 2016) for visual and textual modalities whereas we are interested in spatial information learned in unimodal text representations.",
"Language priors for vision were explored in recent studies.",
"Lu et al. (2016) use word embeddings in a language module to learn a representation for a object-predicate-object triplet for visual relationship detection task.",
"Kiela et al. (2019) propose an approach to extend pre-trained transformer-based LMs for multimodal tasks.",
"Similarly, Lu et al. (2021); Tsimpoukelli et al. (2021) show that pre-trained LMs can be finetuned to perform well in few-shot settings for image classification and open-domain Visual Question Answering (Marino et al., 2019).",
"Marino et al. (2021) also show that multimodal transformer architectures capture implicit knowledge for a pair of objects.",
"Our work differs from these studies (1) we use only unimodal models, (2) we do not finetune models we do not update models during training.",
"The most similar work to ours, Scialom et al. (2020) show that pre-trained LMs can perform reasonably well on Visual Question Generating (Yang et al., 2015; Mostafazadeh et al., 2016) out of the box.",
"One difference is that we use object labels rather than object features or the appearance of objects to query the language model; however, they use object features as a visual token to the language model.",
"Prompts we use in our work shares similarities with prompts designed in PIQA (Paranjape et al., 2021), but our work is evaluated in a multimodal setup.",
"In con-5447 trast, PIQA is evaluated for textual commonsense reasoning tasks.",
"Hallucination idea is also related the work on predicting future observations in long horizons (Vil-legas et al., 2019) which has been studied in the context of learning planning (Hafner et al., 2019) and acquiring skills for control problems (Hafner et al., 2020), and efficient policy learning (Ha and Schmidhuber, 2018), and vision-and-language navigation (Koh et al., 2021).",
"All these approaches are interested in longer horizons; however, in our work, we study predicting single-step future observation.",
"More recent work (Hu et al., 2021; Rombach et al., 2021; Rockwell et al., 2021) study view synthesis from a single visual observation.",
"Unlike these approaches, HOLM does not generate pixel-level views rather abstractions of views with object labels.",
"Affinity scores are mainly studied in computer vision tasks in the form of object co-occurrences.",
"Previous studies have shown that object co-occurrences are efficient representations of visual prior for object categorization for object segmentation (Rabinovich et al., 2007; Galleguil-los et al., 2008; Ladicky et al., 2010) and zero shot object-recognition (Mensink et al., 2014), and scene understanding (Wu et al., 2014).",
"Our work differs from these studies: we do not calculate co-occurrence statistics, i.e. we do not count the frequency of times they appear together; instead, we calculate a probability measure using language models.",
"In this paper, we introduced HOLM a model that can extract prior knowledge about objects from LMs and hallucinate objects in future observations.",
"Our experiments showed that HOLM approach improves over various baselines from the literature.",
"Surprisingly, our model which used background knowledge from LMs outperformed models with knowledge from human-annotated data showing that LMs learn useful knowledge about the world without requiring any visual observations.",
"We also showed that out approach generalizes to both indoor and outdoor scenarios.",
"Our work has limitations in the following ways.",
"First, the hallucination process solely conditions on the current field of view.",
"However, the instruction and the previous observations are available to the system.",
"Conditioning on these sources of information could improve the hallucination accuracy by getting more targeted information from the language model.",
"Second, we assume a fixed lexicon of object labels for hallucination.",
"For both the visual side i.e., the object detector, and the language side i.e., the language model, when an unknown object appears the system cannot use this object for hallucination.",
"Another issue is the scalability, i.e, the affinity scores scale with O ( N 2 ) where N is the number of objects, which might be challenging when N is large.",
"We hope the follow-up work could address these limitations.",
"Future work will explore the use of background knowledge in other domains such as vision-and-language navigation (Anderson et al., 2018c) and dialog (Thomason et al., 2020).",
"We also believe background knowledge of objects would be handy in complex scenarios such as manipulating objects in a simulated environment (Shridhar et al., 2020).",
"Our method examines extracting background knowledge in a zero-shot manner.",
"However, the literature shows that learning how to prompt could be helpful in finding better (Liu et al., 2021).",
"We strictly compared unimodal approaches for hallucination.",
"Future work extend our work by comparing multimodal models (Tan and Bansal, 2019; Lu et al., 2019; Sun et al., 2019a; Su et al., 2020; Li et al., 2020; Qi et al., 2020a; Hu and Singh, 2021).",
"Another interesting direction would be to study the capability of transferring knowledge from indoor to outdoor settings and vise versa.",
"Finally, the success of PREVALENT (Hao et al., 2020) and other pre-training approaches for VLN could stem from their ability to implicitly encode prior knowledge about objects.",
"Hopefully, future studies examines this phenomenon.",
"This material is based upon work partially supported by National Science Foundation awards 1722822 and 1750439, and National Institutes of Health awards R01MH125740, R01MH096951 and U01MH116925.",
"Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors, and no official endorsement should be inferred.",
"We also thank anonymous reviewers of ACL Rolling Review for their valuable feedback."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"method",
"result",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"We introduce a new neural network architecture, Multimodal Neural Graph Memory Networks (MN-GMN), for visual question answering.",
"The MN-GMN uses graph structure with different region features as node attributes and applies a recently proposed powerful graph neural network model, Graph Network (GN), to reason about objects and their interactions in an image.",
"The input module of the MN-GMN generates a set of visual features plus a set of encoded region-grounded captions (RGCs) for the image.",
"The RGCs capture object attributes and their relationships.",
"Two GNs are constructed from the input module using the visual features and encoded RGCs.",
"Each node of the GNs iteratively computes a question-guided contextualized representation of the vi-sual/textual information assigned to it.",
"Then, to combine the information from both GNs, the nodes write the updated representations to an external spatial memory.",
"The final states of the memory cells are fed into an answer module to predict an answer.",
"Experiments show MN-GMN rivals the state-of-the-art models on Visual7W, VQA-v2.0, and CLEVR datasets.",
"Visual question answering (VQA) has been recently introduced as a grand challenge for AI.",
"Given an image and a free-form question about it, the VQA task is to produce an accurate natural language answer.",
"VQA has many applications, such as image retrieval and search.",
"This paper proposes a new neural network architecture for VQA based on the recent Graph Network (GN) (Battaglia et al., 2018).",
"The pairwise interactions between various regions of an image and spatial context in both horizontal and vertical directions are important to answer questions about objects and their interactions in the scene context.",
"For example, to answer How many cats are in the picture?",
"(see Figure 1), a Figure 1: An example from Visual Genome ( https: //visualgenome.org/ ).",
"model needs to aggregate information from multiple, possibly distant, regions; hence applying a convolutional neural network may not be sufficient to perform reasoning over the regions.",
"Our new architecture (see Figure 2), Multimodal Neural Graph Memory Network (MN-GMN), uses a graph structure to represent pairwise interactions between visual/textual features (nodes) from different regions of an image.",
"GNs provide a context-aware neural mechanism for computing a feature for each node that represents complex interactions with other nodes.",
"This enables our MN-GMN to answer questions that need reasoning about complex arrangements of objects in a scene.",
"Previous approaches such as Memory Networks (MN) (Sukhbaatar et al., 2015) and Dynamic Memory Networks (DMN) (Kumar et al., 2015) combined a memory component and an attention mechanism to reason about a set of inputs.",
"The DMN was first proposed for text QA.",
"The text QA task is composed of a question, and a set of statements, called facts, in the order that describes a short story.",
"Only a subset of the facts is required to answer a question.",
"DMN includes four modules: input, question, episodic memory, and answer.",
"The input and question modules encode the question and the facts.",
"Then, the episodic memory takes as input the question and aggregates the facts to produce a vector representation of the relevant information.",
"This vector is passed to the answer module to predict an answer.",
"Previous applications of the MN and DMN for VQA either represent each image region independently as a single visual fact (Xu and Saenko, 2015) or represent the regions of an image like facts of a story with a linear sequential structure (Xiong et al., 2016).",
"But, whereas a linear order may be sufficient for text QA, it is insufficient to represent the 2D context of an image.",
"The major novel aspect of our approach is that we exploit the flexibility of GNs to combine information from two different sources: visual features from different image regions and textual features based on region-grounded captions (RGCs).",
"An RGC detector is learned by transfer learning from a dataset with region-grounded captions.",
"Like visual features, an RGC is specified with a bounding-box.",
"The RGCs capture object attributes and relationships that are often useful to answer visual questions.",
"For example, in Figure 2, to answer Is the water calm?",
", a wave in the ocean is informative; the water is blue specifies an attribute of water ; surfer riding a wave describe interactions between objects.",
"Captions also incorporate commonsense knowledge.",
"Our multimodal graph memory network comprises a visual GN and a textual GN, one for each information source.",
"Each node of the two GNs iteratively computes a question-guided Figure 2: Multimodal Neural Graph Memory Networks for VQA.",
"contextualized representation of the visual/textual information at the bounding-box assigned to it.",
"The third component in our multimodal graph memory module is an external spatial memory, which is designed to combine information across the modalities.",
"Each node writes the updated representations to the external spatial memory, which is composed of memory cells arranged in a 2D grid.",
"The final state of the memory cells is then fed into the answer module to predict an answer.",
"The external spatial memory resolves the redundancy introduced by overlapping bounding-boxes, which causes dif-ficulties, for example, with counting questions.",
"To summarize, our main contributions are: We introduce a new memory network architecture, based on graph neural networks, which can reason about complex arrangements of objects in a scene to answer visual questions.",
"To the best of our knowledge, this is the first work that explicitly incorporates local textual information (RGCs) of the image via a transfer learning technique into a multimodal memory network to answer visual questions.",
"Our architecture, which can be seen as a multimodal relational extension to DMN, rivals the state-of-the-art on three VQA datasets.",
"An important part of the VQA task is to understand the given question.",
"Most approaches utilize a neural network architecture that can handle sequences of flexible length and learn complex temporal dynamics using a sequence of hidden states.",
"Such architectures include Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), and the Gated Recurrent Unit (GRU).",
"To encode a given image, most VQA approaches employ a Convolutional Neural Network (CNN) pre-trained on ImageNet, such as VGGNet and ResNet, to extract visual information from an image.",
"These two recent trends of applying CNNs and RNNs have been successfully applied to image captioning and visual grounding (Johnson et al., 2015) tasks.",
"Grounding connects words to their visual meaning.",
"Our approach sees VQA as first grounding the question in the image and then predicting an answer.",
"Most early deep neural-based VQA models produce an answer conditioned on a global visual feature vector and the embedded question.",
"However, since many questions and answers relate to a specific region in an image, these models often cannot predict a precise answer.",
"To overcome this issue, many attention-based models are proposed.",
"The attention-based models compute an attention weight of spatially localized CNN features based on the question to predict an answer (Xu and Saenko, 2015; Xiong et al., 2016).",
"Teney et al. (2018) used the Bottom-Up Attention model (An-derson et al., 2018) to obtain a set of features at different regions of the image and computed an attention weight for each region based on the encoded question to predict an answer.",
"In Lu et al. (2016), the authors proposed a hierarchical co-attention model that jointly implements both image-guided question attention and question-guided visual attention.",
"Fukui et al. (2016) proposed a VQA model based on multimodal compact bilinear (MCB) pooling to get a joint representation for image and question.",
"Similarly, Yu et al. (2018); Kim et al. (2018) utilized higher-order fusion techniques to combine the question with visual features more ef-ficiently.",
"Cadene et al. (2019) proposed a bilinear fusion algorithm to represent interactions between question and image regions.",
"In Jabri et al. (2016), the authors introduced a model called Relation Networks, which uses multilayer perceptron models to reason over all pairs of local image features extracted from a grid of image regions.",
"Dynamic tree structures have been used in VQA to capture the visual context of image objects (Tang et al., 2019).",
"Yi et al. (2018) proposed a model called neural-symbolic visual question answering (NS-VQA).",
"The NS-VQA uses symbolic structure as prior knowledge to answer questions that need complex reasoning.",
"This model first extracts a structural scene representation from the scene and a program trace from the given question.",
"Then, it applies the program to the scene representation to predict an answer.",
"Recently, a few models are proposed which can learn the interactions between image regions.",
"The graph learner model (Norcliffe-Brown et al., 2018) merges a graph representation of the image based on the question with a graph convolutional network, to learn visual features that can represent question specific interactions.",
"Yang et al. (2018) proposed to reason over a visual representation of the image called scene graph which represents objects and their relationships explicitly.",
"Li et al. (2019) introduced a VQA model called Relation-aware Graph Attention Network (ReGAT).",
"Guided by the question, ReGAT encodes an image into a graph that represents relations among visual objects.",
"The ReGAT is trained on Visual Genome dataset (Krishna et al., 2016).",
"Most of the above models need datasets with annotated object relationship triplets for training.",
"Because annotating triplets is difficult, such datasets are relatively small.",
"Instead, our VQA architecture exploits the rich textual information of an image via incorporating the RGCs to learn the attributes of an image region and the interactions between a set of image regions enclosed by an RGC bounding-box.",
"This information is much easier to obtain because large caption datasets are available.",
"More recently, Hudson and Manning (2019a) proposed a model called Neural State Machine (NSM) for the visual questions that need composi-tionality and multi-step inference.",
"Given an image, the NSM first predicts a probabilistic graph as a structured semantic representation of the image.",
"Then, NSM executes sequential reasoning guided by the input question over the predicted graph, by iteratively traversing the nodes of the graph.",
"The authors show that the proposed model can achieve state-of-the-art results on VQA-CP (Agrawal et al., 2018) and GQA (Hudson and Manning, 2019b) datasets.",
"Shrestha et al. (2019) introduced a VQA model called Recurrent Aggregation of Multimodal Embeddings Network (RAMEN), which is suitable for both natural image understanding and the synthetic datasets that need compositional reasoning.",
"The RAMEN processes visual and question features in three steps: early fusion of spatially localized image features with question features, learning bimodal embeddings, and aggregating them across the image by applying a bidirectional GRU to capture the interactions between bimodal embeddings.",
"In this section, we briefly explain the graph networks (GN) framework (Battaglia et al., 2018).",
"The GN extends several other graph neural networks such as message-passing neural networks (Gilmer et al., 2017), and non-local neural networks (Wang et al., 2018).",
"In a GN framework, a graph is represented by a 3 -tuple G = ( u , V , E ) , where u is a graph-level attribute.",
"The V = { v i } i =1: N is a set of node attributes, where v i is a node attribute of node i , and N is the number of nodes.",
"The E = { ( e k , r k , s k ) } k =1: M is a set of edges, where e k is an edge attribute for the edge going from node s k to node r k , and M is the number of edges.",
"A GN block has three update functions and three aggregation functions .",
"Given an input graph, a GN block updates the graph using the update and aggregation functions.",
"The computational steps in a GN are represented in Algorithm",
"1. The function e is mapped over entire edges to calculate per-edge updates, v is mapped over entire nodes to calculate per-node updates, and u is used to update the global attribute.",
"The 's should be unvarying to permutations of their inputs and must be flexible to a varying number of arguments, such as maximum, summation, etc.",
"Figure 2 shows our MN-GMN architecture, which is composed of four modules: input, question, multimodal graph memory network, and answer.",
"We now describe these modules.",
"The input module has two components: A deep CNN, e.g., Bottom-Up Attention (Anderson et al., 2018), ResNet (He et al., 2015), etc. and a region-grounded caption (RGC) encoder which encodes the RGCs.",
"The RGCs are generated by a dense captioning model.",
"Then, they are encoded with a GRU and a parser (Schuster et al., 2015).",
"The RGCs are useful to answer questions about object attributes and their relationships.",
"We now describe the details and motivation for these components.",
"Visual Feature Extraction.",
"To extract visual features, we use the Bottom-Up Attention model.",
"The features are obtained via Faster R-CNN and 101 -layer ResNet, which attend to specific image regions.",
"Using a fixed threshold on object detection, we extract N 2048 -dimensional image features from N different regions of the image.",
"The value of N depends on the image and ranges from 10 to 100 .",
"Each feature vector has a bounding-box specified by its coordinates r = ( r x , r y , r x (cid:48) , r y (cid:48) ) , where ( r x , r y ) and ( r x (cid:48) , r y (cid:48) ) are the top-left and bottom-right corners of the bounding-box which are normalized to have a values between 0 and 1 based on the height and width of the image.",
"We concatenate each feature vector with its bounding-box to obtain a vector denoted by x i , ( i = 1 , . . . , N ) .",
"Note that x i only describes the image at its bounding-box without exploiting the global spatial context.",
"Captions.",
"To extract a set of RGCs for the image, we use a dense captioning model proposed by Johnson et al. (2015).",
"This model contains a CNN, a dense localization layer, and an RNN language model that generates the captions ( https:// github.com/jcjohnson/densecap ).",
"The model is trained on RGCs from the Visual Genome dataset.",
"The training set that we use does not include VQA-v2.0/Visual7W test images.",
"Through transfer learning, our model is leveraging the caption annotations.",
"Each RGC has a caption, a bounding-box, and a confidence score.",
"To encode a caption, we first create a dictionary using all words in the captions and questions.",
"We preprocess the captions and questions with basic tokenization by converting all sentences to lower case and throwing away non-alphanumeric characters.",
"We map the words to a dense vector representation using a trainable word embedding matrix L L D , where D is the dimensionality of the semantic space, and L is the size of the dictionary.",
"To initialize the word embeddings, we use the pretrained GloVe vectors.",
"The words that don't occur in the pretrained word embedding model are initialized with zeros.",
"We encode a caption using a GRU and a parser.",
"The parser takes a caption and parses it into a set of objects with their attributes and a set of relationship triplets.",
"The encoded RGC is a vector representation denoted by x RD .",
"See appendix A for more detail about the RGC encoding.",
"We encode a question using the same dictionary as we use for captions.",
"This enables our model to match the words in a caption with the words in a question and attend to the relevant caption.",
"The final hidden state of a GRU, denoted by q , is used as the representation of the question.",
"Given a set of visual feature vectors, a set of encoded RGCs, and the encoded question, the multimodal graph memory network module produces a representation of the relevant information based on the encoded question.",
"The memory chooses which parts of the inputs to focus on using an attention mechanism.",
"Unlike previous work (Xu and Saenko, 2015; Xiong et al., 2016), our memory network module is multimodal and relational.",
"That is, it employs both textual and visual information of the input image regions, and it exploits pair-wise interactions between each pair of visual/textual features using a visual/textual GN.",
"Similar to visual features, most of the RGCs may be irrelevant to the given question.",
"Thus, the memory module needs to learn an attention mechanism for focusing on the relevant RGCs.",
"Formally, the multimodal graph memory network is composed of a visual GN G = ( u , V , E ) with N nodes, a textual GN G = ( u , V , E ) with N nodes, and an external spatial memory.",
"Each node of the visual GN represents a visual feature with an associated bounding-box.",
"Similarly, each node of the textual GN has a bounding-box corresponds to a detected RGC of the image.",
"In both GNs, we connect two nodes via two forward and backward edges if they are nearby.",
"That is, we connect two nodes if the Euclidean distance between the normalized center of their bounding-boxes is less than = 0 .",
"5 .",
"Note that even if two nodes of a GN are not neighbors, they may still communicate via the message passing mechanism of the GN.",
"The external memory is a network of memory cells arranged in a P Q grid.",
"Each cell has a fixed location that corresponds to a specific ( H/P ) ( W/Q ) region in the image, where H and W are height and width of the image.",
"Each node of the visual/textual GN sends its information to a memory cell if its bounding-box covers the location of the cell.",
"Since the bounding-boxes may overlap, a cell may get information from multiple nodes.",
"The external memory network is responsible for aggregating the information from both GNs and eliminating redundancy introduced by overlapping bounding-boxes.",
"This makes our architecture less sensitive to the number of detected bounding-boxes.",
"Since the input to the spatial memory is the output of the GNs, the state of the GN nodes can be seen as an internal memory, and the state of the spatial memory can be seen as an external memory like Neural Turing Machines (Graves et al., 2014).",
"Initialization.",
"To initialize each node attribute of the visual GN, we combine a visual feature vector extracted from a region of the image with the encoded question using MCB pooling as v i = q (cid:63) x i , where (cid:63) represents the MCB pooling.",
"Similarly, we initialize each node attribute of the textual GN as v i = q (cid:12) x i , where (cid:12) is the element-wise multiplication.",
"We use the MCB to combine the visual features with the encoded question since the question and visual features are from different modalities.",
"The global attribute u is initialized by a global feature vector of the image extracted from the last layer of the 101 -layer ResNet.",
"This helps to answer questions that need the global features of the scene.",
"The global attribute u is initialized with the encoded question.",
"The edge features of the GNs and memory cells are initialized with zero vectors.",
"Updates.",
"At each iteration, we first update the GNs.",
"Then, we update the content of the memory cells.",
"We update the edge attributes, node attributes, and global attribute of both GNs as described in Algorithm",
"1. For each GN, we use three different GRUs to implement the functions e , v , and u .",
"The e v is an element-wise summation.",
"The v u and e u for visual GN are implemented as v (cid:48) = (cid:0) (cid:80) i ( W 1 v i + b 1 ) (cid:12) ( W 2 v i + b 2 ) (cid:1) e (cid:48) = (cid:0) (cid:80) k ( W 3 e k + b 3 ) (cid:12) ( W 4 e k + b 4 ) (cid:1) where, and are the sigmoid and tangent hyperbolic activation functions, and W i , b i , i = 1 , . . . , 4 , are trainable parameters.",
"This allows to incorporate information from the question for computing the attention weights using the sigmoid function for each node/edge.",
"The v u and e u for the textual GN are implemented in a similar way.",
"Let, v p,q = 1 |N p,q | (cid:80) i N p,q v i and v p,q = 1 | N p,q | (cid:80) i N p,q v i , where N p,q and N p,q are the set of nodes which are connected to the memory cell ( p, q ) in the visual and textual GNs, respectively.",
"Each memory cell is updated as m p,q = f ( m p 1 ,q , m p,q 1 , m p,q +1 , m p +1 ,q ) m (cid:48) p,q = GRU ([ v p,q , v p,q , m p,q ] , m p,q ) where f is a neural network layer which aggregates the memories from the neighboring cells.",
"We repeat these steps for two iterations.",
"Applying one iteration decreases the accuracy by about 2 .",
"0 points.",
"As observed by Kumar et al. (2015), iterating over the inputs allows the memory network to take several reasoning steps which some questions require.",
"The answer module predicts an answer using a GN called answer GN.",
"The nodes of the answer GN are the external spatial memory cells.",
"However, there is an edge between every ordered pair of the nodes (cells), hence the answer GN is a complete graph.",
"This supports reasoning across distant regions of the image.",
"Let m p,q be the final state of the memory cell at location ( p, q ) .",
"We initialize the node attributes of the answer GN denoted by v p,q as v p,q = m p,q .",
"The edge attributes are initialized using the one-hot representation of the location of the sender and receiver memory cells.",
"That is, the edge attribute of the edge going from the memory cell at location ( p, q ) to ( p (cid:48) , q (cid:48) ) , is initialized with a vector of size 2 P + 2 Q which is computed by concatenating the one-hot representation of p, q, p (cid:48) , and q (cid:48) .",
"The global attribute of the answer GN is initialized with a vector of zeros.",
"Then, we update the edge attributes, the node attributes and the global attribute of the answer GN as described in Algorithm",
"1. As before, we use three different GRUs to implement functions e , v , and u .",
"The e v is a simple element-wise summation.",
"The v u and e u are implemented as before, but with different set of parameters.",
"The answer module predicts an answer as p = (cid:0) W g ( u ) + W g ( u ) + b (cid:1) where, u is the updated global attribute of the answer GN, W RY 2048 , W RY 300 , b RY are trainable parameters, g , g are non-linear layers, and Y is the number of possible answers.",
"Following Teney et al. (2018), to exploit prior linguistic information about the candidate answers, the GloVe embeddings of the answer words are used to initialize the rows of the W .",
"Initialization with the Glove embeddings improves the performance by about 1 .",
"0 point.",
"Similarly, to utilize prior visual information about the candidate answers, a visual embedding is used to initialize the rows of W .",
"The visual embedding is obtained by retrieving 10 image from Google Images for each word.",
"Then, the images are encoded using the ResNet101 pretrained on ImageNet to obtain a feature vector of size 2048 .",
"For each word, the average of the feature vectors is used to initialize a row of W .",
"The loss for a single sample is defined as L = (cid:80) Yi =1 p i log( p i ) + (1 p i ) log(1 p i ) where, p i is the i th element of p , and p i is the i th element of the ground-truth vector p ( p i = 1 . 0 if A 3 annotators give the i th answer word, otherwise p i = A/ 3 ).",
"For multiple choice task, the candidate answers are encoded by the last state of a GRU and concatenated with u using a neural network layer as p = (cid:0) w f ([ u , a ]) + b (cid:1) where, a is an encoded answer choice, f is a non-linear layer, and w , b are trainable parameters.",
"For multiple choice task, the binary logistic loss p log( p ) (1 p ) log(1 p ) is used, where p is 1 .",
"0 for an ( image,question,answer ) triplet, if the answer choice is correct, otherwise p is 0 .",
"Training Details and Optimization.",
"The MN-GMN is implemented in TensorFlow.",
"We use a library from https://github.com/deepmind/ graph_nets to implement the GNs.",
"We follow VQA tips in Teney et al. (2018) to train our models.",
"More specifically, to apply an ensemble technique, 20 instances of the model is trained with various initial random seeds.",
"For test images, the scores for the answers by all models are summed, and the answer is predicted using the highest summed score.",
"To minimize the loss, we apply the RMSprop optimization algorithm with a learning rate of 0 .",
"0001 and minibatches of size 100 .",
"Dropout with probability 0 .",
"5 and early stopping are applied to prevent overfitting.",
"Dropout is used after the layer that computes the updated global attribute of the answer GN.",
"During training, all parameters are tuned except for the weights of the CNN and RGC detector to avoid overfitting.",
"For VQA-v2.0 and Visual7W datasets, we augment the training dataset with Visual Genome/GQA images and QA pairs.",
"The training set that we use does not include the VQA-v2.0/Visual7W test or Visual7W validation images.",
"The output dimension of the MCB and the dimension of the hidden layer in both RGC and question GRUs are set to 512 .",
"Also, we set P, Q = 14 and D = 512 .",
"The full model takes around 6 hours to train on two Titan X GPUs.",
"We explain the datasets, baseline models, and evaluation metric that we use in our experiments.",
"Then, the experimental results are discussed.",
"Datasets.",
"VQA-v2.0 (Antol et al., 2015) includes 82 , 783 training images, 40 , 504 validation images, and 81 , 434 testing images.",
"There are 443 , 757 training questions, 214 , 354 validation questions, and 447 , 793 test questions in this dataset.",
"A subset of the standard test set, called test-dev, contains 107 , 394 questions.",
"Each question has 10 candidate answers generated by humans.",
"We choose correct answers that appear more than 8 times.",
"This makes Y = 3 , 110 candidate answers.",
"We use the standard metric (Antol et al., 2015), which is an answer is correct if at least 3 people agree.",
"Visual7W dataset (Zhu et al., 2015) includes 47 , 300 images.",
"We train and evaluate our model on telling questions of the Visual7W which includes 28 , 653 images.",
"This set uses six types of questions: what ( 6% ), where ( 48% ), when ( 16% ), who ( 5% ), why ( 10% ), how ( 15% ).",
"The training, validation and test splits, contain 50% , 20% , 30% of the QA pairs, respectively.",
"For evaluation, Visual7W provides four candidate answers.",
"The Visual7W has fewer language biases compared to VQA.",
"We also experiment on CLEVR dataset (Johnson et al., 2017a).",
"CLEVR evaluates different aspects of visual reasoning, such as attribute recognition, counting, comparison, logic, and spatial relationships.",
"Each object in an image has the following attributes: shape ( cube , sphere , or cylinder ), size ( large or small ), color ( 8 colors), and material ( rubber or metal ).",
"An object detector with 96 classes is trained using all combinations of the attributes by the Tensorflow Object Detection API.",
"We use Faster R-CNN NasNet trained on the MS-COCO dataset as the pretrained model.",
"Given an image, the output of the object detector is a set of object bounding-boxes with their feature vectors.",
"For CLEVR, we omit the textual GN, since CLEVR images do not have rich textual information.",
"Baselines.",
"We compare our model with several architectures developed recently, including the state-of-the-art models ReGAT, BAN, VCTREE, and MuRel.",
"For comparison, we also include three related models in Table 1 that have been proposed more recently in Arxiv preprints during the preparation of this work: LXRT, MSM@MSRA, and MIL@HDU.",
"The ReGAT exploits supervision from Visual Genome relationships.",
"MAN is a memory-augmented neural network which attends to each training exemplar to answer visual questions, even when the answers infrequently happen in the training set.",
"The Count (Zhang et al., 2018) is a neural network model designed to count objects from object proposals.",
"For Visual7W, we compare our models with Zhu et al. (2015), MCB, MAN, and MLP.",
"The MCB leverages the Visual Genome QA pairs as additional training data and the 152 -layer ResNet as a pretrained model.",
"The MLP method uses ( image,question,answer ) triplets to score answer choices.",
"For CLEVR, we compare our models with several baselines proposed by Johnson et al. (2017a) as well as the state-of-the-art models RAMEN, PROGRAM-GEN, and NS-VQA.",
"N2NMN learns to predict a layout based on the question and compose a network using a set of neural modules.",
"The CNN+LSTM+RN learns to infer a relation using a neural network model called Relation Networks.",
"The PROGRAM-GEN exploits supervision from functional programming, which is used to generate CLEVR questions.",
"Ablation Study.",
"We implement several lesion architectures.",
"The MN+ResNet model does not use any GNs and is designed to evaluate the effect of using GN.",
"This model is similar to MN (Sukhbaatar et al., 2015).",
"It applies a soft attention for 14 14 ResNet feature maps (the last 14 14 pooling layer) and generates a representation u = h ( q ) (cid:63) h (cid:48) ( (cid:80) 196 i =1 i x i ) .",
"Here h, h (cid:48) are non-linear layers, and i is an attention weight computed as i = softmax (cid:0) w h (cid:48)(cid:48) ([ x i , q ]) (cid:1) , where w is a learned parameter vector and h (cid:48)(cid:48) is a non-linear layer.",
"Then, an answer is predicted as described before.",
"The N-GMN model only uses the visual GN (no textual GN nor spatial memory).",
"This model evaluates the effect of incorporating RGCs.",
"After two iterations, the global feature vector of the visual GN is used as u to generate an answer.",
"The N-GMN + model only uses the visual GN and the external spatial memory components (no textual GN).",
"This model is used for the CLEVR dataset since CLEVR images do not have rich textual information.",
"The MN-GMN model does not use the external spatial memory.",
"After two iterations, the global feature vector of the visual and textual GNs are concatenated and fed into a non-linear layer to generate u .",
"Finally, MN-GMN is our full model.",
"Results and Discussion.",
"Our experimental results on VQA-v2.0 dataset are reported in Table",
"1. For LXRT, MSM@MSRA, and MIL@HDU, the numbers are reported from the VQA Challenge 2019 Leaderboard (using an ensemble of models).",
"Across all question types, N-GMN outperforms MN+ResNet.",
"This shows that applying the visual GN with explicit object bounding-boxes provides a usefully richer representation than a grid of fixed visual features.",
"MN-GMN outperforms N-GMN.",
"This shows that RGCs help to improve accuracy.",
"RGCs are especially useful for answering the Other and Yes/No question types.",
"Our full model MN-GMN outperforms MN-GMN .",
"This shows that applying external spatial memory is effective, especially for Number questions.",
"The full model's accuracy is higher than the baselines.",
"Our results on Visual7W are reported in Table",
"2. Our N-GMN, MN-GMN , and MN-GMN outperform the baselines MLP, MAN, and MCB+ATT.",
"The results for our N-GMN + on CLEVR in Table 3 are competitive with the state-of-the-art RAMEN, PROGRAM-GEN, and NS-VQA.",
"We emphasize that, unlike PROGRAM-GEN, our algorithm does not exploit supervision from functional programming.",
"Also, unlike NS-VQA, our model is not tailored to synthetic datasets only, since it performs well on both natural and artificial datasets that need multi-step compositional reasoning.",
"Figure 3 shows how MN-GMN can answer a question correctly by incorporating RGCs, whereas N-GMN gives the wrong answer.",
"Figure 4 illustrates the visualization of the attention weights with MN-GMN to answer a Number question.",
"We compute the attention weights that are used to obtain v (cid:48) for each spatial memory cell.",
"More precisely, the magnitude of the sigmoid output that implements v u for the spatial memory is visualized.",
"Each attention weight shows the importance of a fixed region in a 14 14 grid of cells to the question.",
"Figure 5 shows a VQA example on the CLEVR dataset.",
"Appendix B provides more examples.",
"Multi-modal Neural Graph Memory Networks are a new architecture for the VQA task.",
"The MN-GMN represents bimodal local features as node attributes in a graph.",
"It leverages a graph neural network model, Graph Network, to reason about objects and their interactions in a scene.",
"In experiments on three datasets, the MN-GMN showed superior quantitative and qualitative performance compared to the lesion approaches and rivals the state-of-the-art models.",
"A future research direction is to combine RGCs with distant supervision by an external knowledge base to answer the visual questions that need external knowledge; for example Which animal in this photo can climb a tree?"
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Distantly-labeled data can be used to scale up training of statistical models, but it is typically noisy and that noise can vary with the distant labeling technique.",
"In this work, we propose a two-stage procedure for handling this type of data: denoise it with a learned model, then train our final model on clean and denoised distant data with standard supervised training.",
"Our denoising approach consists of two parts.",
"First, a filtering function discards examples from the distantly labeled data that are wholly unusable.",
"Second, a relabeling function repairs noisy labels for the retained examples.",
"Each of these components is a model trained on synthetically-noised examples generated from a small manually-labeled set.",
"We investigate this approach on the ultra-fine entity typing task of Choi et al. (2018).",
"Our baseline model is an extension of their model with pre-trained ELMo representations, which already achieves state-of-the-art performance.",
"Adding distant data that has been denoised with our learned models gives further performance gains over this base model, outperforming models trained on raw distant data or heuristically-denoised distant data.",
"With the rise of data-hungry neural network models, system designers have turned increasingly to unlabeled and weakly-labeled data in order to scale up model training.",
"For information extraction tasks such as relation extraction and entity typing, distant supervision (Mintz et al., 2009) is a powerful approach for adding more data, using a knowledge base (Del Corro et al., 2015; Rabinovich and Klein, 2017) or heuristics (Ratner et al., 2016; Hancock et al., 2018) to automatically label instances.",
"One can treat this data just like any other supervised data, but it is noisy; more effective approaches employ specialized probabilistic models (Riedel et al., 2010; Ratner et al., 2018a), capturing its interaction with other supervision (Wang and Poon, 2018) or breaking down aspects of a task on which it is reliable (Ratner et al., 2018b).",
"However, these approaches often require sophisticated probabilistic inference for training of the final model.",
"Ideally, we want a technique that handles distant data just like supervised data, so we can treat our final model and its training procedure as black boxes.",
"This paper tackles the problem of exploiting weakly-labeled data in a structured setting with a two-stage denoising approach.",
"We can view a distant instance's label as a noisy version of a true underlying label.",
"We therefore learn a model to turn a noisy label into a more accurate label, then apply it to each distant example and add the resulting denoised examples to the supervised training set.",
"Critically, the denoising model can condition on both the example and its noisy label, allowing it to fully leverage the noisy labels, the structure of the label space, and easily learnable correspondences between the instance and the label.",
"Concretely, we implement our approach for the task of fine-grained entity typing, where a single entity may be assigned many labels.",
"We learn two denoising functions: a relabeling function takes an entity mention with a noisy set of types and returns a cleaner set of types, closer to what manually labeled data has.",
"A filtering function discards examples which are deemed too noisy to be useful.",
"These functions are learned by taking manually-labeled training data, synthetically adding noise to it, and learning to denoise, similar to a conditional variant of a denoising autoencoder (Vincent et al., 2008).",
"Our denoising models embed both entities and labels to make their predictions, mirroring the structure of the final entity typing model itself.",
"tity typing scenario and use the same two distant supervision sources as them, based on entity linking and head words.",
"On top of an adapted model from Choi et al. (2018) incorporating ELMo (Pe-ters et al., 2018), navely adding distant data actually hurts performance.",
"However, when our learned denoising model is applied to the data, performance improves, and it improves more than heuristic denoising approaches tailored to this dataset.",
"Our strongest denoising model gives a gain of 3 F 1 absolute over the ELMo baseline, and a 4.4 F 1 improvement over naive incorporation of distant data.",
"This establishes a new state-of-the-art on the test set, outperforming concurrently published work (Xiong et al., 2019) and matching the performance of a BERT model (Devlin et al., 2018) on this task.",
"Finally, we show that denoising helps even when the label set is projected onto the OntoNotes label set (Hovy et al., 2006; Gillick et al., 2014), outperforming the method of Choi et al. (2018) in that setting as well.",
"We consider the task of predicting a structured target y associated with an input x .",
"Suppose we have high-quality labeled data of n (input, target) pairs D = { (cid:0) x (1) , y (1) (cid:1) , . . . , ( x ( n ) , y ( n ) ) } , and noisily labeled data of n (cid:48) (input, target) pairs D (cid:48) = { ( x (1) , y (1) noisy ) , . . . , ( x ( n (cid:48) ) , y ( n (cid:48) ) noisy ) } .",
"For our tasks, D is collected through manual annotation and D (cid:48) is collected by distant supervision.",
"We use two models to denoise data from D (cid:48) : a filtering function f disposes of unusable data (e.g., mislabeled examples) and a relabeling function g transforms the noisy target labels y noisy to look more like true labels.",
"This transformation improves the noisy data so that we can use it to D without introducing damaging amounts of noise.",
"In the second stage, a classification model is trained on the augmented data ( D combined with denoised D (cid:48) ) and predicts y given x in the inference phase.",
"The primary task we address here is the fine-grained entity typing task of Choi et al. (2018).",
"Instances in the corpus are assigned types from a vocabulary of more than 10,000 types, which are divided into three classes: 9 general types, 121 fine-grained types, and 10 , 201 ultra-fine types.",
"This dataset consists of 6K manually annotated examples and approximately 25M distantly-labeled examples.",
"5M examples are collected using entity linking (EL) to link mentions to Wikipedia and gather types from information on the linked pages.",
"20M examples (HEAD) are generated by extracting nominal head words from raw text and treating these as singular type labels.",
"Figure 1 shows examples from these datasets which illustrate the challenges in automatic annotation using distant supervision.",
"The manually-annotated example in",
"(a) shows how numerous the gold-standard labeled types are.",
"By contrast, the HEAD example",
"(b) shows that simply treating the head word as the type label, while correct in this case, misses many valid types, including more general types.",
"The EL example",
"(c) is incorrectly annotated as region , whereas the correct coarse type is actually person .",
"This error is characteristic of entity linking-based distant supervision since identifying the correct link is a challenging problem in and of itself (Milne and Witten, 2008): in this case, Gascoyne is also the name of a region in Western Australia.",
"The EL example in",
"(d) has reasonable types; however, human annotators could choose more types (grayed out) to describe the mention more precisely.",
"The average number of types annotated by humans is 5 .",
"4 per example while the two distant supervision techniques combined yields 1 .",
"5 types per example on average.",
"In summary, distant supervision can (1) produce Djokovic lost to [ Rafael Nadal ] on Monday, ... player tennis_player A person who participates in or is skilled at some game An athlete who plays tennis + + Rafael Nadal Good Bad tennis player winner athlete baseball player person Yes Yes Yes No Yes Filter Relabel Figure 2: Denoising models.",
"completely incorrect types, and (2) systematically miss certain types.",
"To handle the noisy data, we propose to learn a denoising model as shown in Figure 2.",
"This denoising model consists of filtering and relabeling functions to discard and relabel examples, respectively; these rely on a shared mention encoder and type encoder, which we describe in the following sections.",
"The filtering function is a binary classifier that takes these encoded representations and predicts whether the example is good or bad.",
"The relabeling function predicts a new set of labels for the given example.",
"We learn these functions in a supervised fashion.",
"Training data for each is created through synthetic noising processes applied to the manually-labeled data, as described in Sections 3.3 and 3.4.",
"For the entity typing task, each example ( x, y ) takes the form (( s, m ) , t ) , where s is the sentence, m is the mention span, and t is the set of types (either clean or noisy).",
"This encoder is a function m ( s, m ) which maps a sentence s and mention m to a real-valued vector v m .",
"This allows the filtering and relabeling function to recognize inconsistencies between the given example and the provided types.",
"Note that these inputs s and m are the same as the inputs for the supervised version of this task; we can therefore share an encoder architecture between our denoising model and our final typing model.",
"We use an encoder following Choi et al. (2018) with a few key differences, which are described in Section 4.",
"The second component of our model is a module which produces a vector v t = t ( t ) .",
"This is an encoder of an unordered bag of types.",
"Our basic type encoder uses trainable vectors as embeddings for each type and combines these with summing.",
"That is, the noisy types t 1 , . . . , t m are embedded into type vectors { t 1 , . . . , t m } .",
"The final embedding of the type set t = (cid:80) j t j .",
"Type Definition Encoder Using trainable type embeddings exposes the denoising model to potential data sparsity issues, as some types appear only a few or zero times in the training data.",
"Therefore, we also assign each type a vector based on its definition in WordNet (Miller, 1995).",
"Even low-frequent types are therefore assigned a plausible embedding.",
"1 Let w ji denote the i th word of the j th type's most common WordNet definition.",
"Each w ji is embedded using GloVe (Pennington et al., 2014).",
"The resulting word embedding vectors w ji are fed into a bi-LSTM (Hochreiter and Schmidhu-ber, 1997; Graves and Schmidhuber, 2005), and a concatenation of the last hidden states in both directions is used as the definition representation w j .",
"The final representation of the definitions is the sum over these vectors for each type: w = 1 We found this technique to be more effective than using pretrained vectors from GloVe or ELMo.",
"It gave small improvements on an intrinsic evaluation over not incorporating it; results are omitted due to space constraints.",
"k w .",
"Our final v t = [ t ; w ] , the concatenation of the type and definition embedding vectors.",
"The filtering function f is a binary classifier designed to detect examples that are completely mislabeled.",
"Formally, f is a function mapping a labeled example ( s, m, t ) to a binary indicator z of whether this example should be discarded or not.",
"In the forward computation, the feature vectors v m and v t are computed using the mention and type encoders.",
"The model prediction is defined as P ( error ) = (cid:0) u (cid:62) Highway ([ v m ; v t ]) (cid:1) , where is a sigmoid function, u is a parameter vector, and Highway ( ) is a 1-layer highway network (Srivas-tava et al., 2015).",
"We can apply f to each distant pair in our distant dataset D (cid:48) and discard any example predicted to be erroneous ( P ( error ) > 0 .",
"5 ).",
"Training data We do not know a priori which examples in the distant data should be discarded, and labeling these is expensive.",
"We therefore construct synthetic training data D error for f based on the manually labeled data D .",
"For 30% of the examples in D , we replace the gold types for that example with non-overlapping types taken from another example.",
"The intuition for this procedure follows Figure 1: we want to learn to detect examples in the distant data like Gascoyne where heuristics like entity resolution have misfired and given a totally wrong label set.",
"Formally, for each selected example (( s, m ) , t ) , we repeatedly draw another example (( s (cid:48) , m (cid:48) ) , t (cid:48) ) from D until we find t (cid:48) error that does not have any common types with t .",
"We then create a positive training example (( s, m, t (cid:48) error ) , z = 1) .",
"We create a negative training example (( s, m, t ) , z = 0) using the remaining 70% of examples.",
"f is trained on D error using binary cross-entropy loss.",
"The relabeling function g is designed to repair examples that make it through the filter but which still have errors in their type sets, such as missing types as shown in Figure 1b and 1d.",
"g is a function from a labeled example ( s, m, t ) to an improved type set t for the example.",
"Our model computes feature vectors v m and v t by the same procedure as the filtering function f .",
"The decoder is a linear layer with parameters D R | V t | ( d m + d t ) .",
"We compute e = ( D [ v m ; v t ]) , where is an element-wise sigmoid operation designed to give binary probabilities for each type.",
"Once g is trained, we make a prediction t for each ( s, m, t ) D (cid:48) and replace t by t to create the denoised data D (cid:48) denoise = { ( s, m, t ) , . . . } .",
"For the final prediction, we choose all types t (cid:96) where e (cid:96) > 0 .",
"5 , requiring at least two types to be present or else we discard the example.",
"Training data We train the relabeling function g on another synthetically-noised dataset D drop generated from the manually-labeled data D .",
"To mimic the type distribution of the distantly-labeled examples, we take each example ( s, m, t ) and randomly drop each type with a fixed rate 0 .",
"7 independent of other types to produce a new type set t (cid:48) .",
"We perform this process for all examples in D and create a noised training set D drop , where a single training example is (( s, m, t (cid:48) ) , t ) .",
"g is trained on D (cid:48) drop with a binary classification loss function over types used in Choi et al. (2018), described in the next section.",
"In this section, we define the sentence and mention encoder m , which is use both in the denoising model as well as in the final prediction task.",
"We extend previous attention-based models for this task (Shimaoka et al., 2017; Choi et al., 2018).",
"At a high level, we have an instance encoder m that returns a vector v m R d , then multiply the output of this encoding by a matrix and apply a sigmoid to get a binary prediction for each type as a probability of that type applying.",
"Figure 3 outlines the overall architecture of our typing model.",
"The encoder m consists of four vectors: a sentence representation s , a word-level mention representation m word , a character-level mention representation m char , and a headword mention vector m head .",
"The first three of these were employed by Choi et al. (2018).",
"We have modified the mention encoder with an additional bi-LSTM to better encode long mentions, and additionally used the headword embedding directly in order to focus on the most critical word.",
"These pieces use pretrained contextualized word embeddings (ELMo) (Peters et al., 2018) as input.",
"using ELMo; let s (cid:48) i R d ELMo denote the embedding of the i th word.",
"As suggested in Peters et al. (2018), we learn task specific parameters task R and s task R 3 governing these embeddings.",
"We do not fine-tune the parameters of the ELMo LSTMs themselves.",
"Sentence Encoder Following Choi et al. (2018), we concatenate the m th word vector s m in the sentence with a corresponding location embedding (cid:96) m R d loc .",
"Each word is assigned one of four location tokens, based on whether (1) the word is in the left context, (2) the word is the first word of the mention span, (3) the word is in the mention span (but not first), and (4) the word is in the right context.",
"The input vectors [ s (cid:48) ; (cid:96) ] are fed into a bi-LSTM encoder, with hidden dimension is d hid , followed by a span attention layer (Lee et al., 2017; Choi et al., 2018): s = Attention ( bi-LSTM ([ s (cid:48) ; l ])) , where s is the final representation of the sentence s .",
"Mention Encoder To obtain a mention representation, we use both word and character information.",
"For the word-level representation, the mention's contextualized word vectors m (cid:48) are fed into a bi-LSTM with hidden dimension is d hid .",
"The concatenated hidden states of both directions are summed by a span attention layer to form the word-level mention representation: m word = Attention ( bi-LSTM ( m (cid:48) )) .",
"Second, a character-level representation is computed for the mention.",
"Each character is embedded and then a 1-D convolution (Collobert et al., 2011) is applied over the characters of the mention.",
"This gives a character vector m char .",
"Finally, we take the contextualized word vector of the headword m head as a third component of our representation.",
"This can be seen as a residual connection (He et al., 2016) specific to the mention head word.",
"We find the headwords in the mention spans by parsing those spans in isolation using the spaCy dependency parser (Honnibal and Johnson, 2015).",
"Empirically, we found this to be useful on long spans, when the span attention would often focus on incorrect tokens.",
"The final representation of the input x is a concatenation of the sentence, the word& character-level mention, and the mention headword representations, v = (cid:2) s ; m word ; m char ; m head (cid:3) R d .",
"Decoder We treat each label prediction as an independent binary classification problem.",
"Thus, we compute a score for each type in the type vocabulary V t .",
"Similar to the decoder of the relabeling function g , we compute e = ( E v ) , where E R | V t | d and e R | V t | .",
"For the final prediction, we choose all types t (cid:96) where e (cid:96) > 0 .",
"5 .",
"If none of e (cid:96) is greater than 0 .",
"5 , we choose t (cid:96) = arg max e (the single most probable type).",
"Loss Function We use the same loss function as Choi et al. (2018) for training.",
"This loss partitions the labels in general, fine, and ultra-fine classes, and only treats an instance as an example for types of the class in question if it contains a label for that class.",
"More precisely: L = L general 1 general ( t ) + L fine 1 fine ( t ) + L ultra-fine 1 ultra-fine ( t ) , (1) where L ... is a loss function for a specific type class: general, fine-grained, or ultra-fine, and 1 ... ( t ) is an indicator function that is active when one of the types t is in the type class.",
"Each L ... is a sum of binary cross-entropy losses over all types in that category.",
"That is, the typing problem is viewed as independent classification for each type.",
"Note that this loss function already partially repairs the noise in distant examples from missing labels: for example, it means that examples from HEAD do not count as negative examples for general types when these are not present.",
"However, we show in the next section that this is not suffi-cient for denoising.",
"hyper-parameters in our model largely follows Choi et al. (2018) and recommendations for using the pretrained",
"pretrained ELMo-Small model.",
"2 The word embedding size d ELMo is 1024 .",
"The type embedding size and the type definition embedding size are set to 1024 .",
"For most of other model hyperparameters, we use the same settings as Choi et al. (2018): d loc = 50 , d hid = 100 , d char = 100 .",
"The number of filters in the 1-d convolutional layer is 50 .",
"Dropout is applied with p = 0 .",
"2 for the pretrained embeddings, and p = 0 .",
"5 for the mention representations.",
"We limit sentences to 50 words and mention spans to 20 words for computational reasons.",
"The character CNN input is limited to 25 characters; most mentions are short, so this still captures subword information in most cases.",
"The batch size is set to 100 .",
"For all experiments, we use the Adam optimizer (Kingma and Ba, 2014).",
"The initial learning rate is set to 2e-03.",
"We implement all models 3 using PyTorch.",
"To use ELMo, we consult the AllenNLP source code.",
"Ultra-Fine Entity Typing We evaluate our approach on the ultra-fine entity typing dataset from Choi et al. (2018).",
"The 6K manually-annotated English examples are equally split into the training, development, and test examples by the authors of the dataset.",
"We generate synthetically-noised data, D error and D drop , using the 2K training set to train the filtering and relabeling functions, f and h .",
"We randomly select 1M EL and 1M HEAD examples and use them as the noisy data D (cid:48) .",
"Our augmented training data is a combination of the manually-annotated data D and D (cid:48) denoised .",
"OntoNotes In addition, we investigate if denoising leads to better performance on another dataset.",
"We use the English OntoNotes dataset (Gillick et al., 2014), which is a widely used benchmark for fine-grained entity typing systems.",
"The original training, development, and test splits contain 250K, 2K, and 9K examples respectively.",
"Choi et al. (2018) created an augmented training set that has 3.4M examples.",
"We also construct our own augmented training sets with/without denoising using our noisy data D (cid:48) , using the same label mapping from ultra-fine types to OntoNotes types described in Choi et al. (2018).",
"We first compare the performance of our approach to several benchmark systems, then break down the improvements in more detail.",
"We use the model architecture described in Section 4 and train it on the different amounts of data: manually labeled only, naive augmentation (adding in the raw distant data), and denoised augmentation.",
"We compare our model to Choi et al. (2018) as well as to BERT (Devlin et al., 2018), which we fine-tuned for this task.",
"We adapt our task to BERT by forming an input sequence [CLS] sentence [SEP] mention [SEP] and assign the segment embedding A to the sentence and B to the mention span.",
"4 Then, we take the output vector at the position of the [CLS] token (i.e., the first token) as the feature vector v , analogous to the usage for sentence pair classification tasks.",
"The BERT model is fine-tuned on the 2K manually annotated examples.",
"We use the pretrained BERT-Base, uncased model 5 with a step size of 2e-05 and batch size 32.",
"Results Table 1 compares the performance of these systems on the development set.",
"Our model with no augmentation already matches the system of Choi et al. (2018) with augmentation, and incorporating ELMo gives further gains on both precision and recall.",
"On top of this model, adding the distantly-annotated data lowers the performance; the loss function-based approach of (Choi et al., 2018) does not sufficiently mitigate the noise in this data.",
"However, denoising makes the distantly-annotated data useful, improving recall by a substantial margin especially in the general class.",
"A possible reason for this is that the relabeling function tends to add more general types given finer types.",
"BERT performs similarly to ELMo with denoised distant data.",
"As can be seen in the performance breakdown, BERT gains from improvements in recall in the fine class.",
"Table 2 shows the performance of all settings on the test set, with the same trend as the performance on the development set.",
"Our approach outperforms the concurrently-published Xiong et al. (2019); however, that work does not use ELMo.",
"Their improved model could be used for both de-4 We investigated several approaches, including taking the head word piece from the last layer and using that for classification (more closely analogous to what Devlin et al. (2018) did for NER), but found this one to work best.",
"Usage of Pretrained Representations Our model with ELMo trained on denoised data matches the performance of the BERT model.",
"We experimented with incorporating distant data (raw and denoised) in BERT, but the fragility of BERT made it hard to incorporate: training for longer generally caused performance to go down after a while, so the model cannot exploit large external data as effectively.",
"Devlin et al. (2018) prescribe training with a small batch size and very specific step sizes, and we found the model very sensitive to these hyperparameters, with only 2e-05 giving strong results.",
"The ELMo paradigm of incorporating these as features is much more flexible and modular in this setting.",
"Finally, we note that our approach could use BERT for denoising as well, but this did not work better than our current approach.",
"Adapting BERT to leverage distant data effectively is left for future work.",
"We now explicitly compare our denoising approach to several baselines.",
"For each denoising method, we create the denoised EL, HEAD, and EL & HEAD dataset and investigate performance on these datasets.",
"Any denoised dataset is combined with the 2K manually-annotated examples and used to train the final model.",
"functions in a non-learned way.",
"SYNONYMS ANDHYPERNYMS For each type observed in the distant data, we add its synonyms and hypernyms using WordNet (Miller, 1995).",
"This is motivated by the data construction process in Choi et al. (2018).",
"COMMONTYPEPAIRS We use type pair statistics in the manually labeled training data.",
"For each base type that we observe in a distant example, we add any type which is seen more than 90% of the time the base type occurs.",
"For instance, the type art is given at least 90% of the times the film type is present, so we automatically add art whenever film is observed.",
"OVERLAP We train a model on the manually-labeled data only, then run it on the distantly-labeled data.",
"If there is an intersection between the noisy types t and the predicted type t , we combine them and use as the expanded type t .",
"Inspired by tri-training (Zhou and Li, 2005), this approach adds obvious types but avoids doing so in cases where the model has likely made an error.",
"Results Table 3 compares the results on the development set.",
"We report the performance on each of the EL & HEAD, EL, and HEAD dataset.",
"On top of the baseline ORIGINAL , adding synonyms and hypernyms by consulting external knowledge does not improve the performance.",
"Expanding labels with the PAIR technique results in small gains over ORIGINAL .",
"OVERLAP is the most ef-EL & HEAD EL HEAD Type Denoising Method P R F1 P R F1 P R F1 RAWDATA 55.2 26.4 35.7 52.3 26.1 34.8 52.8 28.4 36.9 Heuristic Baselines SYNONYMS & HYPERNYMS 43.0 30.0 35.3 47.5 26.3 33.9 44.8 31.7 37.1 PAIR 50.2 29.0 36.8 49.6 27.0 35.0 50.6 31.2 38.6 OVERLAP 50.0 32.3 39.2 49.5 30.8 38.0 50.6 31.4 38.7 Proposed Approach FILTER 53.1 28.2 36.8 51.9 26.5 35.1 51.2 31.2 38.7 RELABEL 52.1 32.2 39.8 50.2 31.4 38.6 50.2 31.8 38.9 FILTER & RELABEL 50.7 33.1 40.1 52.7 30.5 38.7 50.7 32.1 39.3 Choi et al. (2018) 48.1 23.2 31.3 50.3 19.6 28.2 48.4 22.3 30.6 Table 3: Macro-averaged P/R/F1 on the dev set for the entity typing task of Choi et al. (2018) with various types of augmentation added.",
"fective heuristic technique.",
"This simple filtering and expansion heuristic improves recall on EL.",
"FILTER , our model-based example selector, gives similar improvements to PAIR and OVERLAP on the HEAD setting, where filtering noisy data appears to be somewhat important.",
"6 RELABEL and OVERLAP both improve performance on both EL and HEAD while other methods do poorly on EL.",
"Combining the two model-based denoising techniques, FILTER & RELABEL outperforms all the baselines.",
"We compare our different augmentation schemes for deriving data for the OntoNotes standard as well.",
"Table 4 lists the results on the OntoNotes test set following the adaptation setting of Choi et al. (2018).",
"Even on this dataset, denoising signifi-cantly improves over naive incorporation of distant data, showing that the denoising approach is not just learning quirks of the ultra-fine dataset.",
"Our augmented set is constructed from 2M seed examples while Choi et al. (2018) have a more complex procedure for deriving augmented data from 25M examples.",
"Ours (total size of 2.1M) is on par with their larger data (total size of 3.4M), despite having 40% fewer examples.",
"In this setting, BERT still performs well but not as well as our model with augmented training data.",
"One source of our improvements from data augmentation comes from additional data that is able to be used because some OntoNotes type can be derived.",
"This is due to denoising doing a better job 6 One possible reason for this is identifying stray word senses; film can refer to the physical photosensitive object, among other things.",
"of providing correct general types.",
"In the EL setting, this yields 730k usable examples out of 1M (vs 540K for no denoising), and in HEAD, 640K out of 1M (vs. 73K).",
"To understand what our denoising approach does to the distant data, we analyze the behavior of our filtering and relabeling functions.",
"Table 5 reports the average numbers of types added/deleted by the relabeling function and the ratio of examples discarded by the filtering function.",
"Overall, the relabeling function tends to add more and delete fewer number of types.",
"The HEAD examples have more general types added than the EL examples since the noisy HEAD labels are typically finer.",
"Fine-grained types are added to both EL and HEAD examples less frequently.",
"Ultra-fine examples are frequently added to both datasets, with more added to EL; the noisy EL labels are mostly extracted from Wikipedia defini-General Fine Ultra-Fine Data Add Del Add Del Add Del Filter (%) EL 0.87 0.01 0.36 0.17 2.03 0.12 9.4 HEAD 1.18 0.00 0.51 0.01 1.15 0.16 10.0 Table 5: The average number of types added or deleted by the relabeling function per example.",
"tions, so those labels often do not include ultra-fine types.",
"The filtering function discards similar numbers of examples for the EL and HEAD data: 9 .",
"4% and 10% respectively.",
"Figure 4 shows examples of the original noisy labels and the denoised labels produced by the relabeling function.",
"In example",
"(a), taken from the EL data, the original labels, { location, city } , are correct, but human annotators might choose more types for the mention span, Minneapolis .",
"The relabeling function retains the original types about the geography and adds ultra-fine types about administrative units such as { township, municipality } .",
"In example",
"(b), from the HEAD data, the original label, { dollar } , is not so expressive by itself since it is a name of a currency.",
"The labeling function adds coarse types, { object, currency } , as well as specific types such as { medium of exchange, monetary unit } .",
"In another EL example",
"(c), the relabeling function tries to add coarse and fine types but struggles to assign multiple diverse ultra-fine types to the mention span Michelangelo , possibly because some of these types rarely cooccur ( painter and poet ).",
"Past work on denoising data for entity typing has used multi-instance multi-label learning (Yaghoobzadeh and Schutze, 2015, 2017; Murty et al., 2018).",
"One view of these approaches is that they delete noisily-introduced labels, but they cannot add them, or filter bad examples.",
"Other work focuses on learning type embeddings (Yogatama et al., 2015; Ren et al., 2016a,b); our approach goes beyond this in treating the label set in a structured way.",
"The label set of Choi et al. (2018) is distinct in not being explicitly hierarchical, making past hierarchical approaches difficult to apply.",
"Denoising techniques for distant supervision have been applied extensively to relation extraction.",
"Here, multi-instance learning and probabilis-... play their home games at Target Center in [ Minneapolis ].",
"(a) location, city location, place, city, country, area, region, township, town, municipality ...",
"Vittoria was influenced also by [ Michelangelo ] ...",
"(c) architect, sculptor, painter, poet person, artist, writer [ The dollar ] has been rising , pushing commodities lower ... dollar",
"(b) object, currency, money, medium of exchange, dollar, monetary unit Figure 4: Examples of the noisy labels (left) and the denoised labels (right) for mentions (bold).",
"tic graphical modeling approaches have been used (Riedel et al., 2010; Hoffmann et al., 2011; Sur-deanu et al., 2012; Takamatsu et al., 2012) as well as deep models (Lin et al., 2016; Feng et al., 2017; Luo et al., 2017; Lei et al., 2018; Han et al., 2018), though these often focus on incorporating signals from other sources as opposed to manually labeled data.",
"In this work, we investigated the problem of denoising distant data for entity typing tasks.",
"We trained a filtering function that discards examples from the distantly labeled data that are wholly unusable and a relabeling function that repairs noisy labels for the retained examples.",
"When distant data is processed with our best denoising model, our final trained model achieves state-of-the-art performance on an ultra-fine entity typing task.",
"This work was partially supported by NSF Grant IIS-1814522, NSF Grant SHF-1762299, a Bloomberg Data Science Grant, and an equipment grant from NVIDIA.",
"The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research.",
"Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation.",
"Thanks as well to the anonymous reviewers for their thoughtful comments, members of the UT TAUR lab and Pengx-iang Cheng for helpful discussion, and Eunsol Choi for providing the full datasets and useful resources."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"result",
"other",
"other",
"other",
"other"
] |
[
"This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks.",
"We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of fil-tered CommonCrawl data.",
"Our model, dubbed XLM-R , significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER.",
"XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models.",
"We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale.",
"Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks.",
"We will make our code, data and models publicly available.",
"1 1 Introduction The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale.",
"We present XLM-R a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering.",
"Multilingual masked language models (MLM) like mBERT (Devlin et al., 2018) and XLM (Lam-ple and Conneau, 2019) have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models (Vaswani et al., 2017) on many languages.",
"These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference (Bowman et al., 2015; Williams et al., 2017; Conneau et al., 2018), question answering (Rajpurkar et al., 2016; Lewis et al., 2019), and named entity recognition (Pires et al., 2019; Wu and Dredze, 2019).",
"However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages.",
"In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by re-cent monolingual scaling efforts (Liu et al., 2019).",
"We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size.",
"The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades.",
"We refer to this tradeoff as the curse of multilinguality , and show that it can be alleviated by simply increasing model capacity.",
"We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets.",
"Our best model XLM-RoBERTa ( XLM-R ) outperforms mBERT on cross-lingual classification by up to 23% accuracy on low-resource languages.",
"It outperforms the previous state of the art by 5.1% average accuracy on XNLI, 2.42% average F1-score on Named Entity Recognition, and 9.1% average F1-score on cross-lingual Question Answering.",
"We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa (Liu et al., 2019).",
"These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance.",
"We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding.",
"From pretrained word embeddings (Mikolov et al., 2013b; Pennington et al., 2014) to pretrained contextualized representations (Peters et al., 2018; Schuster et al., 2019) and transformer based language models (Radford et al., 2018; Devlin et al., 2018), unsupervised representation learning has significantly improved the state of the art in natural language understanding.",
"Parallel work on cross-lingual understanding (Mikolov et al., 2013a; Schuster et al., 2019; Lample and Conneau, 2019) extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages.",
"Most recently, Devlin et al. (2018) and Lample and Conneau (2019) introduced mBERT and XLM masked language models trained on multiple languages, without any cross-lingual supervision.",
"Lample and Conneau (2019) propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark (Conneau et al., 2018).",
"They further show strong improvements on unsupervised machine translation and pretraining for sequence generation.",
"Wu et al. (2019) shows that monolingual BERT representations are similar across languages, explaining in part the natural emergence of multilinguality in bottleneck architectures.",
"Separately, Pires et al. (2019) demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks.",
"Huang et al. (2019) showed gains over XLM using cross-lingual multi-task learning, and Singh et al. (2019) demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI.",
"However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach.",
"The benefits of scaling language model pretraining by increasing the size of the model as well as the training data has been extensively studied in the literature.",
"For the monolingual case, Jozefowicz et al. (2016) show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens.",
"GPT (Radford et al., 2018) also highlights the importance of scaling the amount of data and RoBERTa (Liu et al., 2019) shows that training BERT longer on more data leads to significant boost in performance.",
"Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM leads to much better performance.",
"We train on cleaned CommonCrawls (Wen-zek et al., 2019), which increase the amount of data for low-resource languages by two orders of magnitude on average.",
"Similar data has also been shown to be effective for learning high quality word embeddings in multiple languages (Grave et al., 2018).",
"Several efforts have trained massively multilingual machine translation models from large parallel corpora.",
"They uncover the high and low resource trade-off and the problem of capacity dilution (Johnson et al., 2017; Tan et al., 2019).",
"The work most similar to ours is Arivazhagan et al. (2019), which trains a single model in 103 languages on over 25 billion parallel sentences.",
"Sid-dhant et al. (2019) further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI.",
"Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks.",
"In this section, we present the training objective, languages, and data we use.",
"We follow the XLM approach (Lample and Conneau, 2019) as closely as possible, only introducing changes that improve performance at scale.",
"Masked Language Models.",
"We use a Transformer model (Vaswani et al., 2017) trained with the multilingual MLM objective (Devlin et al., 2018; Lample and Conneau, 2019) using only monolingual data.",
"We sample streams of text from each language and train the model to predict the masked tokens in the input.",
"We apply subword toke n r u i dv i f a uk s v t h j a d e r ohubg frf i ko e s nop t e l z hd a p l h e it n l a r s kh i h r t r c s ltt aca s l k a s r l vbn m s m l az kk e t u r hy s q m k t e b e n e s ii s kn tl g l m n m r l ae ugu s w k m a f ky e o a m p ac yp s u z o r g a m yku s oug s a y i m g f y j vgdb r b s a ss u 10 -1 10 0 10 1 10 2 10 3 D a t a s e t s i ze ( i n GB ) CommonCrawl Wikipedia Figure 1: Amount of data in GiB (log-scale) for the 88 languages that appear in both the Wiki-100 corpus used for mBERT and XLM-100, and the CC-100 used for XLM-R.",
"enization directly on raw text data using Sentence Piece (Kudo and Richardson, 2018) with a unigram language model (Kudo, 2018).",
"We sample batches from different languages using the same sampling distribution as Lample and Conneau (2019), but with = 0 .",
"3 .",
"Unlike Lample and Conneau (2019), we do not use language embeddings, which allows our model to better deal with code-switching.",
"We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params).",
"For all of our ablation studies, we use a BERT Base architecture with a vocabulary of 150K tokens.",
"Appendix B goes into more details about the architecture of the different models referenced in this paper.",
"Scaling to a hundred languages.",
"XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix A. Figure 1 specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from Lample and Conneau (2019) trained on Wikipedia text in 100 languages.",
"Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese.",
"In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu.",
"We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu.",
"We also consider larger sets of 15, 30, 60 and all 100 languages.",
"When reporting results on high-resource and low-resource, we refer to the average of English and French results, and the average of Swahili and Urdu results respectively.",
"Scaling the Amount of Training Data.",
"Following Wenzek et al. (2019) 2 , we build a clean CommonCrawl Corpus in 100 languages.",
"We use an internal language identification model in combination with the one from fastText (Joulin et al., 2017).",
"We train language models in each language and use it to filter documents as described in Wenzek et al. (2019).",
"We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili.",
"Figure 1 shows the difference in size between the Wikipedia Corpus used by mBERT and XLM-100, and the CommonCrawl Corpus we use.",
"As we show in Section 5.3, monolingual Wikipedia corpora are too small to enable unsupervised representation learning.",
"Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model.",
"We consider four evaluation benchmarks.",
"For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering.",
"We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models.",
"Cross-lingual Natural Language Inference (XNLI).",
"The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set.",
"The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well.",
"We evaluate our model on cross-lingual transfer from English to other lan-2 https://github.com/facebookresearch/cc net guages.",
"We also consider three machine translation baselines:",
"(i) translate-test : dev and test sets are machine-translated to English and a single English model is used",
"(ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multiligual model on each training set",
"(iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train.",
"For the translations, we use the official data provided by the XNLI project.",
"Named Entity Recognition.",
"For NER, we consider the CoNLL-2002 (Sang, 2002) and CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) datasets in English, Dutch, Spanish and German.",
"We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning.",
"We report the F1 score, and compare to baselines from Lample et al. (2016) and Akbik et al. (2018).",
"Cross-lingual Question Answering.",
"We use the MLQA benchmark from Lewis et al. (2019), which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese.",
"We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English.",
"GLUE Benchmark.",
"Finally, we evaluate the English performance of our model on the GLUE benchmark (Wang et al., 2018) which gathers multiple classification tasks, such as MNLI (Williams et al., 2017), SST-2 (Socher et al., 2013), or QNLI (Rajpurkar et al., 2018).",
"We use BERT Large and RoBERTa as baselines.",
"In this section, we perform a comprehensive analysis of multilingual masked language models.",
"We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks.",
"We then present the results of XLM-R on cross-lingual understanding and GLUE.",
"Finally, we compare multilingual and monolingual models, and present results on low-resource languages.",
"Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM (Pires et al., 2019; Wu and Dredze, 2019; Lewis et al.,",
"2019) has focused on analyzing the performance of fixed pretrained models on downstream tasks.",
"In this section, we present a comprehensive study of different factors that are important to pretraining large scale multilingual models.",
"We highlight the trade-offs and limitations of these models as we scale to one hundred languages.",
"Multilinguality.",
"Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference.",
"For a fixed sized model, the per-language capacity decreases as we increase the number of languages.",
"While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution (Arivazhagan et al., 2019).",
"Positive transfer and capacity dilution have to be traded off against each other.",
"We illustrate this trade-off in Figure 2, which shows XNLI performance vs the number of languages the model is pretrained on.",
"Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer which improves performance, especially on low resource languages.",
"Beyond this point the curse of multilinguality kicks in and degrades performance across all languages.",
"Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100.",
"The same trend can be observed for models trained on the larger CommonCrawl Corpus.",
"The issue is even more prominent when the capacity of the model is small.",
"To show this, we pretrain models on Wikipedia Data in 7, 30 and 100 languages.",
"As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152.",
"In Figure 4, we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality.",
"The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section 3 that we used a fixed vocabulary size of 150K for all models).",
"High-resource vs Low-resource Trade-off.",
"The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword 7 15 30 60 100 Number of languages 40 50 60 70 80 A cc u r ac y Low res.",
"vocabulary, and the rate at which we sample training examples from each language.",
"We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocab).",
"Specifically, we investigate the impact of varying the parameter which controls the exponential smoothing of the language sampling rate.",
"Similar to Lample and Conneau (2019), we use a sampling rate proportional to the number of sentences in each corpus.",
"Models trained with higher values of see batches of high-resource languages more often.",
"Figure 5 shows that the higher the value of , the better the performance on high-resource languages, and vice-versa.",
"When considering overall performance, we found 0 .",
"3 to be an optimal value for , and use this for XLM-R .",
"Importance of Capacity and Vocabulary.",
"In previous sections and in Figure 4, we showed the importance of scaling the model size as we increase the number of languages.",
"Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks.",
"To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes.",
"We keep the overall number of parameters constant by adjusting the width of the transformer.",
"Figure 6 shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K.",
"This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer.",
"For simplicity and given the softmax computational constraints, we use a vocabulary of 250k for XLM-R .",
"We further illustrate the importance of this parameter, by training three models with the same transformer architecture (BERT Base ) but with different vocabulary sizes: 128K, 256K and 512K.",
"We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocab size from 128k to 512k.",
"Larger-scale Datasets and Training.",
"As shown in Figure 1, the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora.",
"Figure 3 shows that for the same BERT Base architecture, all models trained on CommonCrawl obtain significantly better performance.",
"Apart from scaling the training data, Liu et al. (2019) also showed the benefits of training MLMs longer.",
"In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure 7) and training time, on model performance.",
"Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in Lample and Conneau (2019) to be under-tuned.",
"In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued.",
"Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of Lample and Conneau (2019) from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective.",
"Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models.",
"Simplifying Multilingual Tokenization with Sentence Piece.",
"The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text.",
"Instead, we train a Sentence Piece model (SPM) and apply it directly on raw text data for all languages.",
"We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure 7) and hence use SPM for XLM-R .",
"Based on these results, we adapt the setting of Lample and Conneau (2019) and use a large Transformer model with 24 layers and 1024 hidden states, with a 250k vocabulary.",
"We use the multilingual MLM loss and train our XLM-R model for 1.5 Million updates on five-hundred 32GB Nvidia V100 GPUs with a batch size of 8192.",
"We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with = 0 .",
"3 .",
"In this section, we show that it outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark.",
"XNLI.",
"Table 1 shows XNLI results and adds some additional details:",
"(i) the number of models the approach induces (#M),",
"(ii) the data on which the model was trained (D), and",
"(iii) the number of languages the model was pretrained on (#lg).",
"As we show in our results, these parameters significantly impact performance.",
"Column #M specifies whether model selection was done separately on the dev set of each language ( N models), or on the joint dev set of all the languages (single model).",
"We observe a 0.6 decrease in overall accuracy when we go from N models to a single model going from 71.3 to 70.7.",
"We encourage the community to adopt this setting.",
"For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language.",
"XLM-R sets a new state of the art on XNLI.",
"On cross-lingual transfer, XLM-R obtains 80.9% accuracy, outperforming the XLM-100 and mBERT open-source models by 10.2% and 14.6% average accuracy.",
"On the Swahili and Urdu low-resource languages, XLM-R outperforms XLM-100 by 15.7% and 11.4%, and mBERT by 23.5% and 15.8%.",
"While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder (Huang et al., 2019) and XLM (MLM+TLM), which handle only 15 languages, by 5.5% and 5.8% average accuracy respectively.",
"Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 83.6% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 5.1%.",
"Multilingual training is similar to practical applications where training sets are available in various languages for the same task.",
"In the case of XNLI, datasets have been translated, and translate-train-all can be seen as some form of cross-lingual data augmentation (Singh et al., 2019), similar to back-translation (Xie et al., 2019).",
"Named Entity Recognition.",
"In Table 2, we report results of XLM-R and mBERT on CoNLL-2002 and CoNLL-2003.",
"We consider the LSTM + CRF approach from Lample et al. (2016) and the Flair model from Akbik et al. (2018) as baselines.",
"We evaluate the performance of the model Model D #M #lg en fr es de el bg ru tr ar vi th zh hi sw ur Avg Fine-tune multilingual model on English training set (Cross-lingual Transfer) Lample and Conneau (2019) Wiki+MT N 15 85.0 78.7 78.9 77.8 76.6 77.4 75.3 72.5 73.1 76.1 73.2 76.5 69.6 68.4 67.3 75.1 Huang et al. (2019) Wiki+MT N 15 85.1 79.0 79.4 77.8 77.2 77.2 76.3 72.8 73.5 76.4 73.6 76.2 69.4 69.7 66.7 75.4 Devlin et al. (2018) Wiki N 102 82.1 73.8 74.3 71.1 66.4 68.9 69.0 61.6 64.9 69.5 55.8 69.3 60.0 50.4 58.0 66.3 Lample and Conneau (2019) Wiki N 100 83.7 76.2 76.6 73.7 72.4 73.0 72.1 68.1 68.4 72.0 68.2 71.5 64.5 58.0 62.4 71.3 Lample and Conneau (2019) Wiki 1 100 83.2 76.7 77.7 74.0 72.7 74.1 72.7 68.7 68.6 72.9 68.9 72.5 65.6 58.2 62.4 70.7 XLM-R Base CC 1 100 85.8 79.7 80.7 78.7 77.5 79.6 78.1 74.2 73.8 76.5 74.6 76.7 72.4 66.5 68.3 76.2 XLM-R CC 1 100 89.1 84.1 85.1 83.9 82.9 84.0 81.2 79.6 79.8 80.8 78.1 80.2 76.9 73.9 73.8 80.9 Translate everything to English and use English-only model (TRANSLATE-TEST) BERT-en Wiki 1 1 88.8 81.4 82.3 80.1 80.3 80.9 76.2 76.0 75.4 72.0 71.9 75.6 70.0 65.8 65.8 76.2 RoBERTa Wiki+CC 1 1 91.3 82.9 84.3 81.2 81.7 83.1 78.3 76.8 76.6 74.2 74.1 77.5 70.9 66.7 66.8 77.8 Fine-tune multilingual model on each training set (TRANSLATE-TRAIN) Lample and Conneau (2019) Wiki N 100 82.9 77.6 77.9 77.9 77.1 75.7 75.5 72.6 71.2 75.8 73.1 76.2 70.4 66.5 62.4 74.2 Fine-tune multilingual model on all training sets (TRANSLATE-TRAIN-ALL) Lample and Conneau (2019) Wiki+MT 1 15 85.0 80.8 81.3 80.3 79.1 80.9 78.3 75.6 77.6 78.5 76.0 79.5 72.9 72.8 68.5 77.8 Huang et al. (2019) Wiki+MT 1 15 85.6 81.1 82.3 80.9 79.5 81.4 79.7 76.8 78.2 77.9 77.1 80.5 73.4 73.8 69.6 78.5 Lample and Conneau (2019) Wiki 1 100 84.5 80.1 81.3 79.3 78.6 79.4 77.5 75.2 75.6 78.3 75.7 78.3 72.1 69.2 67.7 76.9 XLM-R Base CC 1 100 85.4 81.4 82.2 80.3 80.4 81.3 79.7 78.6 77.3 79.7 77.9 80.2 76.1 73.1 73.0 79.1 XLM-R CC 1 100 89.1 85.1 86.6 85.7 85.3 85.9 83.5 83.2 83.1 83.7 81.5 83.7 81.6 78.0 78.1 83.6 Table 1: Results on cross-lingual classification.",
"on each of the target languages in three different settings:",
"(i) train on English data only (en)",
"(ii) train on data in target language (each)",
"(iii) train on data in all languages (all).",
"Results of mBERT are reported from Wu and Dredze (2019).",
"Note that we do not use a linear-chain CRF on top of XLM-R and mBERT representations, which gives an advantage to Akbik et al. (2018).",
"Without the CRF, our XLM-R model still performs on par with the state of the art, outperforming Akbik et al. (2018) on Dutch by 2 .",
"09 points.",
"On this task, XLM-R also outperforms mBERT by 2.42 F1 on average for cross-lingual transfer, and 1.86 F1 when trained on each language.",
"Training on all languages leads to an average F1 score of 89.43%, outperforming cross-lingual transfer approach by 8.49%.",
"Question Answering.",
"We also obtain new state of the art results on the MLQA cross-lingual question answering benchmark, introduced by Lewis et al. (2019).",
"We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset.",
"We report results in Table 3.",
"XLM-R obtains F1 and accuracy scores of 70.7% and 52.7% while the previous state of the art was 61.6% and 43.5%.",
"XLM-R also outperforms mBERT by 13.0% F1-score and 11.1% accuracy.",
"It even outperforms BERT-Large on English, con-firming its strong monolingual performance.",
"In this section, we present results of multilingual XLM models against monolingual BERT models.",
"GLUE: XLM-R versus RoBERTa.",
"Our goal is to obtain a multilingual model with strong performance on both, cross-lingual understanding tasks as well as natural language understanding tasks for each language.",
"To that end, we evaluate XLM-R on the GLUE benchmark.",
"We show in Table 4, that XLM-R obtains better average dev performance than BERT Large by 1.6% and reaches performance on par with XLNet Large .",
"The RoBERTa model outperforms XLM-R by only 1.0% on average.",
"We believe future work can reduce this gap even further by alleviating the curse of multilinguality and Model train #lgs en es de ar hi vi zh Avg BERT-Large en 1 80.2 / 67.4 ---mBERT en 102 77.7 / 65.2 64.3 / 46.6 57.9 / 44.3 45.7 / 29.8 43.8 / 29.7 57.1 / 38.6 57.5 / 37.3 57.7 / 41.6 XLM-15 en 15 74.9 / 62.4 68.0 / 49.8 62.2 / 47.6 54.8 / 36.3 48.8 / 27.3 61.4 / 41.8 61.1 / 39.6 61.6 / 43.5 XLM-R Base en 100 77.1 / 64.6 67.4 / 49.6 60.9 / 46.7 54.9 / 36.6 59.4 / 42.9 64.5 / 44.7 61.8 / 39.3 63.7 / 46.3 XLM-R en 100 80.6 / 67.8 74.1 / 56.0 68.5 / 53.6 63.1 / 43.5 69.2 / 51.6 71.3 / 50.9 68.0 / 45.4 70.7 / 52.7 Table 3: Results on MLQA question answering We report the F1 and EM (exact match) scores for zero-shot classification where models are fine-tuned on the English Squad dataset and evaluated on the 7 languages of MLQA.",
"vocabulary dilution.",
"These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks.",
"XNLI: XLM versus BERT.",
"A recurrent criticism against multilingual models is that they obtain worse performance than their monolingual counterparts.",
"In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark.",
"We extend our comparison between multilingual XLM models and monolingual BERT models on 7 languages and compare performance in Table 5.",
"We train 14 monolingual BERT models on Wikipedia and CommonCrawl (capped at 60 GiB), and two XLM-7 models.",
"We increase the vocabulary size of the multilingual model for a better comparison.",
"We found that multilingual models can outperform their monolingual BERT counterparts .",
"Specifically, in Table 5, we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy.",
"However, by making use of multilingual training (translate-train-all) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of BERT models trained on CC is 77.5%.",
"This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance.",
"We observed in Table 5 that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model; most likely due to the small size of the data for these languages.",
"On the other hand, pretraining on CC improved performance by up to 10 points.",
"This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R .",
"Specifi-cally, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improvement on Swahili and Urdu respectively.",
"In this work, we introduced XLM-R , our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages.",
"We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering.",
"We exposed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters.",
"We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages."
] | [
"result",
"method",
"result",
"abstain",
"result",
"objective",
"abstain",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"objective",
"result"
] |
[
"Current sequence-to-sequence models are trained to minimize cross-entropy and use softmax to compute the locally normalized probabilities over target sequences.",
"While this setup has led to strong results in a variety of tasks, one unsatisfying aspect is its length bias: models give high scores to short, inadequate hypotheses and often make the empty string the argmaxthe so-called cat got your tongue problem.",
"Recently proposed entmax-based sparse sequence-to-sequence models present a possible solution, since they can shrink the search space by assigning zero probability to bad hypotheses, but their ability to handle word-level tasks with transformers has never been tested.",
"In this work, we show that entmax-based models effectively solve the cat got your tongue problem, removing a major source of model error for neural machine translation.",
"In addition, we generalize label smoothing, a critical regularization technique, to the broader family of Fenchel-Young losses, which includes both cross-entropy and the entmax losses.",
"Our resulting label-smoothed entmax loss models set a new state of the art on multilingual grapheme-to-phoneme conversion and deliver improvements and better calibration properties on cross-lingual morphological inflection and machine translation for 7 language pairs.",
"Sequence-to-sequence models ( seq2seq : Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017) have become a powerful and flexible tool for a variety of NLP tasks, including machine translation (MT), morphological inflection (MI; Faruqui et al., 2016), and grapheme-to-phoneme conversion (G2P; Yao and Zweig, 2015).",
"These models often perform well, but they have a bias that favors short hypotheses.",
"This bias is problematic: it has been pointed out as the cause (Koehn and Knowles, 2017; Yang et al., 2018; Murray and Chiang, 2018) of the beam search curse , in which increasing the width of beam search actually decreases performance on neural machine translation (NMT).",
"Further illustrating the severity of the problem, Stahlberg and Byrne (2019) showed that the highest-scoring target sequence in NMT is often the empty string, a phenomenon they dubbed the cat got your tongue problem.",
"These results are undesirable because they show that NMT models' performance depends on the search errors induced by a narrow beam.",
"It would be preferable for models to assign higher scores to good translations than to bad ones, rather than to depend on search errors to make up for model errors.",
"The most common way to alleviate this shortcoming is by altering the decoding objective (Wu et al., 2016; He et al., 2016; Yang et al., 2018; Meister et al., 2020a), but this does not address the underlying problem: the model overestimates the probability of implausible hypotheses.",
"Other solutions use alternate training strategies (Murray and Chiang, 2018; Shen et al., 2016), but it would be preferable not to change the training algorithm.",
"In this paper, we propose a solution based on sparse seq2seq models (Peters et al., 2019), which replace the output softmax (Bridle, 1990) with the entmax transformation.",
"Entmax, unlike softmax, can learn locally sparse distributions over the target vocabulary.",
"This allows a sparse model to shrink the search space : that is, it can learn to give inadequate hypotheses zero probability, instead of counting on beam search to prune them.",
"This has already been demonstrated for MI, where the set of possible hypotheses is often small enough to make beam search exact (Peters et al., 2019; Peters and Martins, 2019).",
"We extend this analysis to MT: although exact beam search is not possible for this large vocabulary task, we show that entmax models prune many inadequate hypotheses, effectively solving the cat got your tongue problem.",
"entmax is that it is not compatible with label smoothing (Szegedy et al., 2016), a useful regularization technique that is widely used for transformers (Vaswani et al., 2017).",
"We solve this problem by generalizing label smoothing from the cross-entropy loss to the wider class of Fenchel-Young losses (Blondel et al., 2020), which includes the entmax loss as a particular case.",
"We show that combining label smoothing with entmax loss improves results on both characterand word-level tasks while keeping the model sparse.",
"We note that, although label smoothing improves calibration, it also exacerbates the cat got your tongue problem regardless of loss function.",
"To sum up, we make the following contributions: 1 We show empirically that models trained with entmax loss rarely assign nonzero probability to the empty string, demonstrating that entmax loss is an elegant way to remove a major class of NMT model errors.",
"We generalize label smoothing from the cross-entropy loss to the wider class of Fenchel-Young losses, exhibiting a formulation for label smoothing which, to our knowledge, is novel.",
"We show that Fenchel-Young label smoothing with entmax loss is highly effective on both characterand word-level tasks.",
"Our technique allows us to set a new state of the art on the SIGMORPHON 2020 shared task for multilingual G2P (Gorman et al., 2020).",
"It also delivers improvements for crosslingual MI from SIGMORPHON 2019 (McCarthy et al., 2019) and for MT on IWSLT 2017 German English (Cettolo et al., 2017), KFTT Japanese English (Neubig, 2011), WMT 2016 Romanian English (Bojar et al., 2016), and WMT 2014 English German (Bojar et al., 2014) compared to smoothed and unsmoothed cross-entropy loss.",
"A seq2seq model learns a probability distribution p ( y | x ) over sequences y from a target vocabulary V , conditioned on a source sequence x .",
"This 1 Our code is available at https://github.com/ deep-spin/S7 .",
"where V is the Kleene closure of V .",
"This is an intractable problem; seq2seq models depend on heuristic search strategies, most commonly beam search (Reddy et al., 1977).",
"Most seq2seq models are locally normalized, with probabilities that decompose by the chain rule: p ( y | x ) = | y | (cid:89) i =1 p ( y i | x, y <i ) .",
"This factorization implies that the probability of a hypothesis being generated is monotonically nonincreasing in its length, which favors shorter sequences.",
"This phenomenon feeds the beam search curse because short hypotheses 2 are pruned from a narrow beam but survive a wider one.",
"The conditional distribution p ( y i | x, y <i ) is obtained by first computing a vector of scores (or logits) z = f ( x, y <i ) R | V | , where f is parameterized by a neural network, and then applying a transformation : R | V | (cid:52) | V | , which maps scores to the probability simplex (cid:52) | V | := { p R | V | : p 0 , (cid:107) p (cid:107) 1 = 1 } .",
"The usual choice for is softmax (Bridle, 1990), which returns strictly positive values, ensuring that all sequences V have nonzero probability.",
"Coupled with the short sequence bias, this causes significant model error.",
"Sparse seq2seq models.",
"In a sparse model, the output softmax is replaced by a transformation from the entmax family (Peters et al., 2019).",
"Like softmax, entmax transformations return a vector in the simplex and are differentiable (almost) everywhere.",
"However, unlike softmax, they are capable of producing sparse probability distributions .",
"Concretely, this is done by using the so-called exponential function (Tsallis, 1988) in place of the exponential, where 0 : exp ( v ) := (cid:40) [1 + (1 ) v ] 1 / (1 ) + , (cid:54) = 1 exp( v ) , = 1 .",
"The -exponential function converges to the regular exponential when 1 .",
"Entmax models assume that p ( y i | x, y <i ) results from an -entmax 2 We use hypothesis to mean any sequence that ends with the special end-of-sequence token.",
"transformation of the scores z , defined as [ entmax ( z )] y := exp 2 ( z y ( z )) , (4) where ( z ) is a constant which ensures normalization.",
"When = 1 , (4) turns to a regular exponential function and 1 ( z ) = log (cid:80) | V | y (cid:48) =1 exp( z y (cid:48) ) is the log-partition function, recovering softmax.",
"When = 2 , we recover sparsemax (Martins and Astudillo, 2016).",
"For { 1 .",
"5 , 2 } , fast algorithms to compute (4) are available which are almost as fast as evaluating softmax.",
"For other values of , slower bisection algorithms exist.",
"Entmax transformations are sparse for any > 1 , with higher values tending to produce sparser outputs.",
"This sparsity allows a model to assign exactly zero probability to implausible hypotheses.",
"For tasks where there is only one correct target sequence, this often allows the model to concentrate all probability mass into a small set of hypotheses, making search exact (Peters and Martins, 2019).",
"This is not possible for open-ended tasks like machine translation, but the model is still locally sparse, assigning zero probability to many hypotheses.",
"These hypotheses will never be selected at any beam width .",
"Fenchel-Young Losses.",
"Inspired by the softmax generalization above, Blondel et al. (2020) provided a tool for constructing a convex loss function.",
"Let : (cid:52) | V | R be a strictly convex regularizer which is symmetric, i.e. , ( p ) = ( p ) for any permutation and any p (cid:52) | V | .",
"3 Equipped with , we can define a regularized prediction function : R | V | (cid:52) | V | , with this form: ( z ) = argmax p (cid:52) | V | z (cid:62) p ( p ) (5) where z R | V | is the vector of label scores (logits) and : (cid:52) | V | R is a regularizer.",
"Equation 5 recovers both softmax and entmax with particular choices of : the negative Shannon entropy, ( p ) = (cid:80) y V p y log p y , recovers the variational form of softmax (Wainwright and Jordan, 2008), while the negative Tsallis entropy (Tsallis, 1988) with parameter , defined as ( p ) = (cid:40) 1 ( 1) (cid:16)(cid:80) y V p y 1 (cid:17) , if (cid:54) = 1 (cid:80) y V p y log p y , if = 1 , (6) 3 It is instructive to think of as a generalized negative entropy: for example, as shown in Blondel et al. (2020, Prop. 4), strict convexity and symmetry imply that is minimized by the uniform distribution.",
"For a more comprehensive treatment of Fenchel-Young losses, see the cited work.",
"recovers the -entmax transformation in (4), as shown by Peters et al. (2019).",
"Given the choice of , the Fenchel-Young loss function L is defined as L ( z ; q ) := ( z ) + ( q ) z (cid:62) q , (7) where q is a target distribution, most commonly a one-hot vector indicating the gold label, q = e y = [0 , . . . , 0 , 1 (cid:124)(cid:123)(cid:122)(cid:125) y -th entry , 0 , . . . , 0] , and is the convex conjugate of , defined variationally as: ( z ) := max p (cid:52) | V | z (cid:62) p ( p ) .",
"The name stems from the Fenchel-Young inequality, which states that the quantity (7) is nonnegative (Borwein and Lewis, 2010, Prop. 3.3.4).",
"When is the generalized negative entropy, the loss (7) becomes the Kullback-Leibler divergence between q and softmax ( z ) (KL divergence; Kullback and Leibler, 1951), which equals the cross-entropy when q is a one-hot vector.",
"More generally, if is the negative Tsallis entropy (6), we obtain the -entmax loss (Peters et al., 2019).",
"Fenchel-Young losses have nice properties for training neural networks with backpropagation: they are non-negative, convex, and differentiable as long as is strictly convex (Blondel et al., 2020, Prop. 2).",
"Their gradient is z L ( z ; q ) = ( z ) q , (9) which generalizes the gradient of the cross-entropy loss.",
"Figure 1 illustrates particular cases of Fenchel-Young losses considered in this paper.",
"Label smoothing (Szegedy et al., 2016) has become a popular technique for regularizing the output of",
"a neural network.",
"The intuition behind it is that using the gold target labels from the training set can lead to overconfident models.",
"To overcome this, label smoothing redistributes probability mass from the gold label to the other target labels.",
"When the redistribution is uniform, Pereyra et al. (2017) and Meister et al. (2020b) pointed out that this is equivalent (up to scaling and adding a constant) to adding a second term to the loss that computes the KL divergence DKL ( u (cid:107) p ) between a uniform distribution u and the model distribution p .",
"While it might seem appealing to add a similar KL regularizer to a Fenchel-Young loss, this is not possible when p contains zeroes because the KL divergence term becomes infinite.",
"This makes vanilla label smoothing incompatible with sparse models .",
"Fortunately, there is a more natural generalization of label smoothing to Fenchel-Young losses.",
"For (cid:15) [0 , 1] , we define the Fenchel-Young label smoothing loss as follows: L ,(cid:15) ( z , e y ) := L ( z , (1 (cid:15) ) e y + (cid:15) u ) .",
"The intuition is the same as in cross-entropy label smoothing: the target one-hot vector is mixed with a uniform distribution.",
"This simple definition leads to the following result, proved in Appendix A: Proposition 1.",
"The Fenchel-Young label smoothing loss can be written as L ,(cid:15) ( z , e y ) = L ( z , e y )+ (cid:15) ( z y z )+ C, (11) where C = ( e y ) + ((1 (cid:15) ) e y + (cid:15) u ) is a constant which does not depend on z , and z := u (cid:62) z is the average of the logits.",
"Furthermore, up to a constant, we also have L ,(cid:15) ( z , e y ) L ( z , e y ) + L ( z , u ) , (12) where = (cid:15) 1 (cid:15) .",
"The first expression (11) shows that, up to a constant, the smoothed Fenchel-Young loss equals the original loss plus a linear regularizer (cid:15) ( z y z ) .",
"While this regularizer can be positive or negative, we show in Appendix A that its sum with the original loss L ( z , e y ) is always non-negative intuitively, if the score z y is below the average, resulting in negative regularization, the unregularized loss will also be larger, and the two terms balance each other.",
"Figure 2 shows the effect of this regularization in the graph of the loss we see that a correct prediction is linearly penalized with a slope of (cid:15) ; the larger the confidence, the larger the penalty.",
"In particular, when is the Shannon negentropy, this result shows a simple expression for vanilla label smoothing which, to the best of our knowledge, is novel.",
"The second expression (12) shows that it can also be seen as a form of regularization towards the uniform distribution.",
"When is the Shannon entropy, the regularizer becomes a KL divergence and we obtain the interpretation of label smoothing for cross-entropy provided by Pereyra et al. (2017) and Meister et al. (2020b).",
"Therefore, the same interpretation holds for the entire Fenchel-Young family if the regularization uses the corresponding Fenchel-Young loss with respect to a uniform.",
"Gradient of Fenchel-Young smoothed loss.",
"From Prop.",
"1 and Equation 9, we immediately obtain the following expression for the gradient of the smoothed loss: z L ,(cid:15) ( z , e y ) = = z L ( z , e y ) + (cid:15) ( e y u ) = ( z ) (1 (cid:15) ) e y (cid:15) u , (13) that is, the computation of this gradient is straightforward by adding a constant vector to the original gradient of the Fenchel-Young loss; as the latter, it only requires the ability of computing the transformation, which is efficient in the entmax case as shown by Peters et al. (2019).",
"Note that, unlike the gradient of the original entmax loss, the gradient of its smoothed version is not sparse (in the sense that it will not contain many zeroes); however, since u is the uniform distribution, it will contain many constant terms with value (cid:15)/ | V | .",
"We trained seq2seq models for three tasks: multilingual G2P, crosslingual MI, and MT. These tasks present very different challenges.",
"In G2P and MI, character-level vocabularies are small and there is usually only one correct target sequence.",
"The relative simplicity of these tasks is offset by the small quantity of training data and the strict evaluation: the model must produce exactly the right sequence.",
"This tests Fenchel-Young label smoothing's ability to learn exactly in a low-resource setting.",
"On the other hand, MT is trained with much larger corpora and evaluated with less strict metrics, but uses subword vocabularies with sizes in the tens of thousands and has to manage more ambiguity because sentences typically have many correct translations.",
"Entmax Loss : this influences the sparsity of the probability distributions the model returns, with = 1 recovering cross-entropy and larger values encouraging sparser distributions.",
"We use { 1 , 1 .",
"5 , 2 } for G2P and MI, and { 1 , 1 .",
"5 } for MT. Fenchel-Young Label Smoothing (cid:15) : higher values give more weight to the uniform smoothing distribution, discouraging sparsity.",
"We use (cid:15) { 0 , 0 .",
"01 , 0 .",
"02 , . . . , 0 .",
"15 } for G2P, (cid:15) { 0 , 0 .",
"01 , 0 .",
"05 , 0 .",
"1 } for MI, and (cid:15) { 0 , 0 .",
"01 , 0 .",
"1 } for MT. We trained all models with early stopping for a maximum of 30 epochs for MI, 15 epochs for WMT 2014 English German MT, and 100 epochs otherwise, keeping the best checkpoint according to a task-specific validation metric: Phoneme Error Rate for G2P, average Levenshtein distance for MI, and detokenized BLEU score for MT. At test time, we decoded with a beam width of 5.",
"Our PyTorch code (Paszke et al., 2017) is based on JoeyNMT (Kreutzer et al., 2019) and the entmax implementation from the entmax package.",
"4 4.1 Multilingual G2P Data.",
"We use the data from SIGMORPHON 2020 Task 1 (Gorman et al., 2020), which includes 3600 training examples in each of 15 languages.",
"We train a single multilingual model (following Peters and Martins, 2020) which must learn to apply spelling rules from several writing systems.",
"Results.",
"Multilingual G2P results are shown in Table 1, along with the best previous result (Yu et al., 2020).",
"We report two error metrics, each of which is computed per-language and averaged: Word Error Rate (WER) is the percentage of hypotheses which do not exactly match the reference.",
"This harsh metric gives no credit for partial matches.",
"Phoneme Error Rate (PER) is the sum of Levenshtein distances between each hypothesis and the corresponding reference, divided by the total length of the references.",
"These results show that the benefits of sparse losses and label smoothing can be combined.",
"Individually, both label smoothing and sparse loss functions ( > 1 ) consistently improve over unsmoothed cross-entropy ( = 1 ).",
"Together, they produce the best reported result on this dataset.",
"Our approach is very simple, as it requires manipulating only the loss function: there are no changes to the standard seq2seq training or decoding algorithms, no language-specific training or tuning, and no external auxiliary data.",
"In contrast, the previous state of the art (Yu et al., 2020) relies on a complex self-training procedure in which a genetic algorithm is used to learn to ensemble several base models.",
"Data.",
"Our data come from SIGMORPHON 2019 Task 1 (McCarthy et al., 2019), which includes datasets for 100 language pairs.",
"Each training set combines roughly 10,000 examples from a high resource language with 100 examples from a (sim-ulated) low resource language.",
"5 Development and test sets only cover the low resource language.",
"Training.",
"We reimplemented GATEDATTN (Pe-ters and Martins, 2019), an RNN model with separate encoders for lemma and morphological tags.",
"We copied their hyperparameters, except that we used two layers for all encoders.",
"We concatenated the high and low resource training data.",
"In order to make sure the model paid attention to the low resource training data, we either oversampled it 100 times or used data hallucination (Anastasopoulos and Neubig, 2019) to generate synthetic examples.",
"Hallucination worked well for some languages but not others, so we treated it as a hyperparameter.",
"Results.",
"We compare to CMU-03 6 (Anasta-sopoulos and Neubig, 2019), a two-encoder model with a sophisticated multi-stage training schedule.",
"Despite our models' simpler training technique, 6 We specifically use the official task numbers from McCarthy et al. (2019), which are more complete than those reported in Anastasopoulos and Neubig (2019).",
"they performed nearly as well in terms of accuracy, while recording, to our knowledge, the best Levenshtein distance on this dataset.",
"Having shown the effectiveness of our technique on character-level tasks, we next turn to MT. To our knowledge, entmax loss has never been used for transformer-based MT; Correia et al. (2019) used entmax only for transformer attention.",
"Data.",
"We made use of these language pairs: IWSLT 2017 German English ( DE EN , Cettolo et al., 2017): 200k training examples.",
"KFTT Japanese English ( JA EN , Neubig, 2011): 330k training examples.",
"WMT 2016 Romanian English ( RO EN , Bojar et al., 2016): 610k training examples.",
"WMT 2014 English German ( WMT 14, Bojar et al., 2014): 4.4 million training examples.",
"We used joint BPE (Sennrich et al., 2016) for all language pairs, 7 with 25,000 merges for WMT 14 and 32,000 merges for all other pairs.",
"Training.",
"We trained transformers with the base dimension and layer settings (Vaswani et al., 2017).",
"We optimized with Adam (Kingma and Ba, 2015) and used Noam scheduling with 20,000 warmup 7 Although English and Japanese have different writing systems, we still found it beneficial to use joint BPE for JA EN because many subwords occur in both the English and Japanese training corpora.",
"These include many named entities, which are often written with the native form alongside the transliteration.",
"Results.",
"Table 3 reports our models' performance in terms of untokenized BLEU (Papineni et al., 2002), which we computed with SacreBLEU (Post, 2018).",
"The results show a clear advantage for label smoothing and entmax loss, both separately and together: label-smoothed entmax loss is the best-performing configuration on 3 out of 7 language pairs, unsmoothed entmax loss performs best on another 3 out of 7, and they tie on the remaining one.",
"Although label-smoothed cross-entropy is seen as an essential ingredient for transformer training, entmax loss models beat it even without label smoothing for every pair except EN (cid:1) DE .",
"Model error.",
"Stahlberg and Byrne (2019) showed that the bias in favor of short strings is so strong for softmax NMT models that the argmax sequence is usually the empty string.",
"However, they did not consider the impact of sparsity or label smoothing.",
"8 We show in Table 4 how often the empty string is more probable than the beam search hypothesis.",
"This is an upper bound for how often the empty string is the argmax because there can also be other hypotheses that are more probable than the empty string.",
"The results show that and (cid:15) both matter: sparsity substantially reduces the frequency with which the empty string is more probable than the beam search hypothesis, while label smoothing usually increases it.",
"Outcomes vary widely with = 1 .",
"5 and (cid:15) = 0 .",
"1 : WMT 14 and DE EN models did not seriously suffer from the problem, EN (cid:1) RO did, and the other three language pairs differed from one run to another.",
"The optimal label smoothing value with cross-entropy is invariably (cid:15) = 0 .",
"1 , which encourages the cat got your tongue problem; on the other hand, entmax 8 They trained with transformer-base settings, implying label smoothing, and did not compare to unsmoothed losses.",
"Other inadequate strings.",
"Even if a model rules out the empty string, it might assign nonzero probability to other short, inadequate strings.",
"We investigated this with a depth-limited search inspired by Stahlberg and Byrne (2019)'s exact decoding technique.",
"Unfortunately, the algorithm's exponential runtime made it unfeasible to perform the search for all language pairs, and in particular we found it too slow for the dense search space of cross entropy models, even after applying various optimizations.",
"9 Therefore, we show results for EN (cid:1) RO entmax loss models in Table 5.",
"These results show the same trend as on the empty string: short strings are usually pruned for entmax loss models with (cid:15) = 0 or (cid:15) = 0 .",
"01 , but are likely to have a higher score than the beam-decoded hypothesis with (cid:15) = 0 .",
"1 .",
"Label smoothing and sparsity.",
"Peters et al. (2019) previously showed that RNN models trained with entmax loss become locally very sparse.",
"Table 6 shows that this is true of transformers as well.",
"Label smoothing encourages greater density, although for the densest language pair ( WMT",
"14) this only equates to an average support size of roughly 3300 out of a vocabulary of almost 30,000 word types.",
"The relationship between density and overestimating the empty string is inconsistent with (cid:15) = 0 .",
"1 : WMT 14 and DE EN models become much more dense but rarely overestimate the empty string (Table 4).",
"The opposite occurs for RO EN : models with (cid:15) = 0 .",
"1 become only slightly more dense but are much more prone to model error.",
"This suggests that corpus-specific factors influence both sparsity and how easily bad hypotheses can be pruned.",
"9 Specifically, when we pushed target sequences to the search's internal stack, we ordered them so that those ending in the end-of-sequence symbol would be popped first.",
"We also discarded any sequence whose probability was lower than that of the beam-decoded hypothesis.",
"Calibration.",
"This is the degree to which a model's confidence about its predictions ( i.e. class probabilities) accurately measure how likely those predictions are to be correct.",
"It has been shown (Muller et al., 2019; Kumar and Sarawagi, 2019) that label smoothing improves the calibration of seq2seq models.",
"We computed the Expected Calibration Error (ECE; Naeini et al., 2015) 10 of our MT models and confirmed their findings.",
"Our results, in Table 7, also show that sparse models are better calibrated than their dense counterparts.",
"This shows that entmax models do not become overconfident even though probability mass is usually concentrated in a small set of possibilities.",
"The good calibration of label smoothing may seem surprising in light of Table 4, which shows that label-smoothed models overestimate the probability of inadequate hypotheses.",
"However, ECE depends only on the relationship between model accuracy and the score of the most likely label.",
"This shows the tradeoff: larger (cid:15) values limit overconfidence but make the tail heavier.",
"Setting = 1 .",
"5 with a moderate (cid:15) value seems to be a sensible balance.",
"10 ECE = (cid:80) Mm =1 | B m | N | acc ( B m ) conf ( B m ) | partitions the model's N force-decoded predictions into M evenly-spaced bins and computes the difference between the accuracy ( acc ( B m ) ) and the average probability of the most likely prediction ( conf ( B m ) ) within that bin.",
"We use M = 10 .",
"Label smoothing.",
"Our work fits into a larger family of techniques that penalize model overconfidence.",
"Pereyra et al. (2017) proposed the confidence penalty, which reverses the direction of the KL divergence in the smoothing expression.",
"Meister et al. (2020b) then introduced a parameterized family of generalized smoothing techniques, different from Fenchel-Young Label Smoothing, which recovers vanilla label smoothing and the confidence penalty as special cases.",
"In a different direction, Wang et al. (2020) improved inference calibration with a graduated label smoothing technique that uses larger smoothing weights for predictions that a baseline model is more confident of.",
"Other works have smoothed over sequences instead of tokens (Norouzi et al., 2016; Elbayad et al., 2018; Lukasik et al., 2020), but this requires approximate techniques for deciding which sequences to smooth.",
"MAP decoding and the empty string.",
"We showed that sparse distributions suffer less from the cat got your tongue problem than their dense counterparts.",
"This makes sense in light of the find-ing that exact MAP decoding works for MI, where probabilities are very peaked even with softmax (Forster and Meister, 2020).",
"For tasks like MT, this is not the case: Eikema and Aziz (2020) pointed out that the argmax receives so little mass that it is almost arbitrary, so seeking it with MAP decoding (which beam search approximates) itself causes many deficiencies of decoding.",
"On the other hand, Meister et al. (2020a) showed that beam search has a helpful bias and introduced regularization penalties for MAP decoding that encode it explicitly.",
"Entmax neither directly addresses the faults of MAP decoding nor compensates for the locality biases of beam search, instead shrinking the gap between beam search and exact decoding.",
"It would be interesting, however, to experiment with these two approaches with entmax in place of softmax.",
"We generalized label smoothing from cross-entropy to the wider class of Fenchel-Young losses.",
"When combined with the entmax loss, we showed meaningful gains on character and word-level tasks, including a new state of the art on multilingual G2P.",
"In addition, we showed that the ability of entmax to shrink the search space significantly alleviates the cat got your tongue problem in machine translation, while also improving model calibration.",
"This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by the P2020 programs MAIA and Unbabel4EU (LISBOA-01-0247-FEDER-045909 and LISBOA-01-0247-FEDER-042671), and by the Fundacao para a Ciencia e Tecnologia through contract UIDB/50008/2020.",
"We thank Wilker Aziz, Vlad Niculae, and the anonymous reviewers, for their helpful discussion and feedback."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"result",
"objective",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"Recent work in the field of automatic summarization and headline generation focuses on maximizing ROUGE scores for various news datasets.",
"We present an alternative, extrinsic, evaluation metric for this task, A nswering P erformance for E valuation of S ummaries .",
"APES utilizes recent progress in the field of reading-comprehension to quantify the ability of a summary to answer a set of manually created questions regarding central entities in the source article.",
"We first analyze the strength of this metric by comparing it to known manual evaluation metrics.",
"We then present an end-to-end neural abstractive model that maximizes APES, while increasing ROUGE scores to competitive results.",
"The task of automatic text summarization aims to produce a concise version of a source document while preserving its central information.",
"Current summarization models are divided into two approaches, extractive and abstractive .",
"In extractive summarization, summaries are created by selecting a collection of key sentences from the source document ( e.g. , Nallapati et al. (2017); Narayan et al. (2018)).",
"Abstractive summarization, on the other hand, aims to rephrase and compress the input text in order to create the summary.",
"Progress in sequence-to-sequence models (Sutskever et al., 2014) has led to recent success in abstractive summarization models.",
"Current models (Nallapati et al., 2016; See et al., 2017; Paulus et al., 2017; Celikyilmaz et al., 2018) made various adjustments to sequence-to-sequence models to gain improvements in ROUGE (Lin, 2004) scores.",
"ROUGE has achieved its status as the most common method for summaries evaluation by showing high correlation to manual evaluation methods, e.g. , the Pyramid method (Nenkova See et al. (2017)'s Summary: bolton will offer new contracts to emile heskey, 37, eidur gudjohnsen, 36, and adam bogdan, 27.",
"heskey and gudjohnsen joined on short-term deals in december.",
"eidur gudjohnsen has scored five times in the championship .",
"APES score: 0.33 Baseline Model Summary (Encoder / Decoder / Attention / Copy / Coverage): bolton will offer new contracts to emile heskey, 37, eidur gudjohnsen, 36, and goalkeeper adam bogdan, 27.",
"heskey and gudjohnsen joined on short-term deals in december, and have helped neil lennon 's side steer clear of relegation.",
"eidur gudjohnsen has scored five times in the championship, as well as once in the cup this season .",
"APES score: 0.33 Our Model (APES optimization): bolton will offer new contracts to emile heskey, 37, eidur gudjohnsen, 36, and goalkeeper adam bogdan, 27.",
"heskey joined on short-term deals in december, and have helped neil lennon 's side steer clear of relegation.",
"eidur gudjohnsen has scored five times in the championship, as well as once in the cup this season.",
"lennon has also fined mid-fielders barry bannan and neil danns two weeks wages this week.",
"both players have apologised to lennon .",
"APES score: 1.00 Questions from the CNN/Daily Mail Dataset: Q : goalkeeper also rewarded with new contract; A : adam bogdan Q : and neil danns both fined by club after drinking incident; A : barry bannan Q : barry bannan and both fined by club after drinking incident; A : neil danns Figure 1: Example 3083 from the test set.",
"et al., 2007).",
"Tasks like TAC AESOP (Owczarzak and Dang, 2011) used ROUGE as a strong baseline and confirmed the correlation of ROUGE with manual evaluation.",
"While it has been shown that ROUGE is correlated to Pyramid, Louis and Nenkova (2013) show that this summary level correlation decreases significantly when only a single reference is given.",
"In contrast to the smaller manually curated DUC datasets used in the past, more recent large-scale summarization and headline generation datasets ( CNN/Daily Mail (Hermann et al., 2015), Giga-word (Graff et al., 2003), New York Times (Sand-haus, 2008)) provide only a single reference summary for each source document.",
"In this work, we introduce a new automatic evaluation metric more suitable for such single reference news article datasets.",
"We define APES, A nswering P erformance for E valuation of S ummaries , a new metric for automatically evaluating summarization systems by querying summaries with a set of questions central to the input document (see Fig. 1).",
"Reducing the task of summaries evaluation to an extrinsic task such as question answering is intuitively appealing.",
"This reduction, however, is effective only under specific settings: (1) Availability of questions focusing on central information and (2) availability of a reliable question answering (QA) model.",
"Concerning issue 1, questions focusing on salient entities can be available as part of the dataset: the headline generation dataset most used in recent years, the CNN/Daily Mail dataset (Her-mann et al., 2015), was constructed by creating questions about entities that appear in the reference summary.",
"Since the target summary contains salient information from the source document, we consider all entities appearing in the target summary as salient entities .",
"In other cases, salient questions can be generated in an automated manner, as we discuss below.",
"Concerning issue 2, we focus on a relatively easy type of questions: given source documents and associated questions, a QA system can be trained over fill-in-the-blank type questions as was shown in Hermann et al. (2015) and Chen et al. (2016).",
"In their work, Chen et al. (2016) achieve ceiling performance' for the QA task on the CNN/Daily Mail dataset.",
"We empirically assess in our work whether this performance level (accu-racy of 72.4 and 75.8 over CNN and Daily Mail respectively) makes our evaluation scheme feasible and well correlated with manual summary evaluation.",
"Given the availability of salient questions and automatic QA systems, we propose APES as an evaluation metric for news article datasets, the most popular summarization genre in recent years.",
"To measure the APES metric of a candidate summary, we run a trained QA system with the summary as input alongside a set of questions associated with the source document.",
"The APES metric for a summarization model is the percentage of questions that were answered correctly over the whole dataset, as depicted in Fig. 2. We leave Figure 2: Evaluation flow of APES.",
"the task of extending this method to other genres for future work.",
"Our contributions in this work are: (1) We first present APES, a new extrinsic summarization evaluation metric; (2) We show APES strength through an analysis of its correlation with Pyramid and Responsiveness manual metrics; (3) we present a new abstractive model which maximizes APES by increasing attention scores of salient entities, while increasing ROUGE to competitive level.",
"We make two software packages available online:",
"(a) An evaluation library which receives the same input as ROUGE and produces both APES and ROUGE scores.",
"1",
"(b) Our PyTorch (Paszke et al., 2017) based summarizer that optimizes APES scores together with trained models.",
"2 2 Related Work 2.1 Evaluation Methods Automatic evaluation metrics of summarization methods can be categorized into either intrinsic or extrinsic metrics.",
"Intrinsic metrics measure a summary's quality by measuring its similarity to a manually produced target gold summary or by inspecting properties of the summary.",
"Examples of such metrics include ROUGE (Lin, 2004), Basic Elements (Hovy et al., 2006) and Pyramid (Nenkova et al., 2007).",
"Alternatively, extrinsic metrics test the ability of a summary to support performing related tasks and compare the performance of humans or systems when completing a task that requires understanding the source document (Steinberger and Jezek, 2012).",
"Such extrinsic tasks may include text categorization, infor-1 www.github.com/mataney/APES 2 www.github.com/mataney/APES-optimizer mation retrieval, question answering (Jing et al., 1998) or assessing the relevance of a document to a query (Hobson et al., 2007).",
"ROUGE, or Recall-Oriented Understudy for Gisting Evaluation (Lin, 2004), refers to a set of automatic intrinsic metrics for evaluating automatic summaries.",
"ROUGE-N scores a candidate summary by counting the number of N-gram overlaps between the automatic summary and the reference summaries.",
"Other notable metrics from this family are ROUGE-L, where scores are given by the Longest Common Subsequence (LCS) between the suggested and reference documents, and ROUGE-SU4, which uses skip-bigram, a more flexible method for computing the overlap of bi-grams.",
"The Pyramid method (Nenkova et al., 2007) is a manual evaluation metric that analyzes multiple human-made summaries into Summary Content Units (SCUs) and assigns importance weights to each SCU.",
"Different summaries are scored by assessing the extent to which they convey SCUs according to their respective weights.",
"Pyramid is most effective when multiple human-made summaries alongside manual intervention to detect SCUs in source and target documents.",
"The Basic Elements method (Hovy et al., 2006), an automated procedure for finding short fragments of content, has been suggested to automate a method related to Pyramid.",
"Like Pyramid, this method requires multiple human-made gold summaries, making this method expensive in time and cost.",
"Responsiveness (Dang, 2005), another manual metric is a measure of overall quality combining both content selection, like Pyramid, and linguistic quality.",
"Both Pyramid and Responsiveness are the standard manual approaches for content evaluation of summaries.",
"Automated Pyramid evaluation has been attempted in the past (Owczarzak, 2009; Yang et al., 2016; Hirao et al., 2018).",
"This task is complex because it requires (1) identifying SCUs in a text, which requires syntactic parsing and the extraction of key subtrees from the identified units, and (2) the clustering of these extracted textual elements into semantically similar SCUs.",
"These two operations are noisy, and the compounded performance summary evaluation is relying on noisy intermediary representation accordingly suffers.",
"Other relevant quantities for summaries quality assessment include: readability (or fluency), grammaticality, coherence and structure, focus, referential clarity, and non-redundancy.",
"Although some automatic methods were suggested as summarization evaluation metrics (Vadlapudi and Ka-tragadda, 2010; Tay et al., 2017), these metrics are commonly assessed manually, and, therefore, rarely reported as part of experiments.",
"Our proposed evaluation method, APES, attempts to capture the capability of a summary to enable readers to answer questions similar to the manual task initially discussed in Jing et al. (1998) and recently reported in Narayan et al. (2018).",
"Our contribution consists of automating this method and assessing the feasibility of the resulting approximation.",
"The first paper to use an end-to-end neural network for the summarization task was Rush et al. (2015): this work is based on a sequence-to-sequence model (Sutskever et al., 2014) augmented with an attention mechanism (Bahdanau et al., 2014).",
"Nallapati et al. (2016) was the first to tackle the headline generation problem using the CNN/Daily Mail dataset (Hermann et al., 2015) adopted for the summarization task.",
"See et al. (2017) followed the work of Nallapati et al. (2016) and added an additional loss term to reduce repetitions at decoding time.",
"Paulus et al. (2017) introduces intra-attention in order to attend over both the input and previously generated outputs.",
"The authors also present a hybrid learning objective designed to maximize ROUGE scores using Reinforcement Learning.",
"All the papers mentioned above have been evaluated using ROUGE, and all, except for Rush et al. (2015), used CNN/Daily Mail as their main headline generation dataset.",
"Of all the mentioned models we compare our suggested model only to (See et al., 2017), as it is the only paper to publish output summaries.",
"Evaluating a summarization system with APES applies the following method: APES receives a set of news articles summaries, question-and-answer pairs referring to central information from the text and an automatic QA system.",
"Then, APES uses this QA system to determine the total number of questions answered correctly according to the re-Original Reference Summary: Arsenal beat Burnley 1-0 in the EPL.",
"a goal from Aaron Ramsey secured all three points.",
"ceived summaries.",
"The evaluation process is depicted in Fig. 2. We use Chen et al. (2016)'s model trained on the CNN dataset as our QA system for all our experiments.",
"For a given summarizer and a given dataset, APES reports the average number of questions correctly answered from the summaries produced by the system.",
"This method is especially relevant for the main headline generation dataset used in recent years, the CNN/Daily Mail dataset, as it was initially created for the question answering task by Hermann et al. (2015).",
"It contains 312,085 articles with relevant questions scraped from the two news agencies' websites.",
"The questions were created by removing different entities from the manually produced highlights to create 1,384,887 fill-in-the-blank questions.",
"The dataset was later repurposed by Cheng and Lapata (2016) and Nallapati et al. (2016) to the summarization task by reconstructing the original highlights from the questions.",
"Fig. 3 shows an example for creating questions out of a given summary.",
"When questions are not intrinsically available, one requires to (1) automatically generate relevant questions; (2) use an appropriate automatic QA system.",
"Similarly to the method used in Hermann et al. (2015), we produce fill-in-the-blank questions in the following way: given a reference summary, we find all possible entities, ( i.e. , Name, Nationality, Organization, Geopolitical Entity or Facility) using an NER system (Honnibal and Johnson, 2015) and we create fill-in-the-blank type questions where the answers are these entities.",
"We provide code for this procedure and apply it on the AESOP datasets in our experiments 3 .",
"For the automatic QA system, we reused in our experiment the same QA system trained on CNN/Daily Mail for different News datasets (in-cluding AESOP).",
"To enable reproducibility, the trained models used are available online.",
"To evaluate if an automatic metric can accurately measure a summarization system performance, we measure its correlation to manual metrics.",
"The TAC 2011 Automatically Evaluating Summaries of Peers ( AESOP ) task (Owczarzak and Dang, 2011) has provided a dataset that includes, alongside the source documents and reference summaries, three manual metrics: Pyramid (Nenkova et al., 2007), Overall Responsiveness (Dang, 2005) and Overall Readability.",
"Two sets of documents are provided, we use only the documents from the first set (Generic summarization), as the second set is relevant to the update summarization task.",
"To evaluate APES on the AESOP dataset, we create the required set of questions as presented in Fig. 3. We used the same QA system (Chen et al., 2016) trained on the CNN dataset.",
"This system is a competent QA system for this dataset, as both AESOP and CNN consist of news articles.",
"Training a QA model on the AESOP dataset would be optimal, but it is not possible due to the small size of this dataset.",
"Nonetheless, even this incomplete QA system reports valuable results that justify APES value.",
"While the two datasets are similar, they differ dramatically in the type of topics the articles cover.",
"CNN/Daily Mail articles deal with people, or more generally, Named Entities, averaging 6 named entities per summary.",
"In contrast, TAC summaries average 0.87 entities per summary.",
"The TAC dataset is divided into various topics.",
"The first four topics, Accidents and Natural Disasters , Attacks , Health and Safety and Endangered Resources average 0.65 named entities per summary, making them incomparable to the typical case in the CNN/Daily Mail dataset.",
"The last topic, Investigations and Trials , averages 3.35 named entities per summary, making it more similar.",
"We report correlation only on this segment of TAC, which contains 204 documents.",
"We follow the work of Louis and Nenkova (2013) and compare input level APES scores with manual Pyramid and Responsiveness scores provided in the AESOP task.",
"Results are in Table 1.",
"In Input level , correlation is computed for each summary against its manual score.",
"In contrast, system level reports the average score for a summarization system over the entire dataset.",
"While ROUGE baselines were beaten only by a very small number of suggested metrics in the original AESOP task, we find that APES shows better correlation than the popular R-1, R-2 and R-L, and the strong R-SU.",
"Although showing statistical significance for our hypothesis is difficult because of the small dataset size, we claim APES gives an additional value comparing to ROUGE: ROUGE metrics are highly correlated with each other (around 0.9) as shown in Table 2, indicating that multiple ROUGE metrics provide little additional information.",
"In contrast, APES is not correlated with ROUGE metrics to the same extent (around 0.6).",
"The above suggests that APES offers additional information regarding the text in a manner that ROUGE does not.",
"For this reason, we believe APES complements ROUGE.",
"Louis and Nenkova (2013) further shows that ROUGE correlation to manual scores tends to drop when reducing the number of reference summaries.",
"While APES is not immune to this, as the number of questions becomes smaller when the number of reference summaries is reduced, it still performs well when reducing the number of references to a single document.",
"In the AESOP dataset, when comparing with respect to each of the 8 assessors separately on Pyramid and Respon-Model APES #Entities #Salient Entities See et al. (2017) 38.2 4.90 2.57 Baseline model 39.8 4.99 2.61 Gold Summaries 85.5 6.00 4.90 Table 3: Average number of entities and salient entities.",
"siveness, the correlation of APES is highest in 7 out of 16 trials, while that of R1 is highest in 6 trials and RL in 2 trials.",
"In general, the correlation between any of the metrics and single references is extremely noisy, indicating that reliance on evaluations of a single reference, which is standard on large-scale summarization datasets, is far from satisfactory.",
"We have established that APES achieves equal or improved correlation with manual metrics when compared to ROUGE, and captures a different type of information than ROUGE, by that, APES can complement ROUGE as an automatic evaluation metric.",
"We now turn to develop a model that directly attempts to optimize APES.",
"News articles include a high number of named entities.",
"When analyzing systems performance on APES (Table 3), a system may fail either when it misses to generate a salient entity in the summary, or when it includes the salient entity, but in a context not relevant to corresponding questions.",
"When this happens, the QA system would not be able to identify the entity as an answer to a question referring to the context.",
"We compared the average number and type of entities in summaries generated by existing automatic summarizers to that in reference summaries.",
"We note that the observed models, while producing state-of-the-art ROUGE scores and a high number of named entities (5 vs. 6 on average), fail to focus on salient entities when generating a summary (about 2.6 salient entities are mentioned on average vs. 4.9 in the reference summaries).",
"Notice that solely increasing the number of entities is damaging: mentioning too many entities causes a decrease in the QA accuracy, as the number of possible answers increases, which would distract the QA system.",
"This has motivated us in suggesting the following model.",
"To experiment with direct optimization of APES, we reconstruct as a starting point a model that encapsulates the key techniques used in recent abstractive summarization models.",
"Our model is based on the OpenNMT project (Klein et al., 2017).",
"All PyTorch (Paszke et al., 2017) code, including entities attention and beam search refinement is available online 4 .",
"We also include generated summaries and trained models in this repository.",
"Recent work in the field of abstractive summarization (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017; Paulus et al., 2017) share a common architecture as the foundation for their neural models: an encoder-decoder model (Sutskever et al., 2014) with an attention mechanism (Bah-danau et al., 2014).",
"Nallapati et al. (2016) and See et al. (2017) augment this model with a copy mechanism (Vinyals et al., 2015).",
"This architecture minimizes the following loss function: loss t = log P ( w t ) loss = 1 T y T y (cid:88) t =0 loss t (1) loss t , is the negative log likelihood of generating the gold target word w t at timestep t where P ( ) is the probability distribution over the vocabulary.",
"We refer the reader to See et al. (2017) for a more detailed description of this architecture.",
"Unlike See et al. (2017), we do not train a specific coverage mechanism to avoid repetitions.",
"Instead, we incorporate Wu et al. (2016)'s refine-ments of beam search in order to manipulate both the summaries' coverage and their length.",
"In the standard beam search, we search for a sequence Y that maximizes a score function s ( Y, X ) = log( P ( Y | X )) .",
"Wu et al. (2016) introduce two additional regularization factors, coverage penalty and length penalty .",
"These two penalties, with an additional refinement suggested in Gehrmann et al. (2018), yield the following score function: 4 www.github.com/mataney/APES-optimizer s ( Y, X ) = log( P ( Y | X )) /lp ( Y ) cp ( X ; Y ) lp ( Y ) = (5 + | Y | ) (5 + 1) cp ( X ; Y ) = ( TX + TX (cid:88) i =1 max( TY (cid:88) j =1 a i,j , 1 .",
"0)) (2) where , are hyper-parameters that control the length and coverage penalties respectively and a i,j is the attention probability of the j -th target word on the i -th source word.",
"cp ( X ; Y ) , the coverage penalty, is designed to discourage repeated attention to the same source word and favor summaries that cover more of the source document with respect to the attention distribution.",
"lp ( Y ) , the length normalization, is designed to compare between beam hypotheses of different length accurately.",
"In general, beam search favors shorter outputs as log-probability is added at each step, yielding lower scores for longer sequences.",
"lp compensates for this tendency.",
"In the following section, we describe how we extend this baseline model in order to maximize the APES metric.",
"The new model learns to incorporate more of the salient entities from the source document in order to optimize its APES metric.",
"As we observed, failure to capture salient entities in summaries is one cause for low APES score.",
"To drive our model towards the identification and mention of salient entities from the source document, we introduce an additional attention layer that learns the important entities of a source document.",
"We hypothesize that these entities are more likely to appear in the target summary, and thus are better candidate answers to one of the salient questions for this document.",
"We learn for each word in the source document its probability of belonging to a salient entity mention.",
"We adopt the classical soft attention mechanism of Bahdanau et al. (2014): after encoding the source document, we run an additional single alignment model with an empty query and a sigmoid layer instead of the standard softmax layer.",
"where U, b, v are learnable weight matrices, h j is the encoder hidden state for the j -th word and ( ) is a logistic sigmoid function.",
"a ej reflects the probability of the j -th token of being a salient entity.",
"The second modification comparing to Bahdanau et al. (2014) is that we replace the softmax function with a sigmoid: while in the standard alignment model, we intend to obtain a normalized probability distribution over all the tokens of the source document, here we would like to get a probability of each token being a salient entity independently of other tokens.",
"In order to drive this attention layer towards salient entities, we define an additional term in the loss function.",
"where s is a binary vector of source length size, where s j = 1 if x j is a salient entity, and 0 otherwise, and BCE is the binary cross entropy function.",
"This term is added to the standard log-likelihood loss, changing equation (1) to the following composite loss function: loss = loss e + (1 ) 1 T y T y (cid:88) t =0 loss t (5) where is a hyper-parameter.",
"We join these two terms in the loss function in order to learn the entities attention layer while keeping the summarization ability learned by Eq.",
"(1).",
"After the attention layer has learned the probability of each source token to belong to a salient entity, we pass the predicted alignment to the beam search component at test-time.",
"Using this alignment data, we wish to encourage beam search to favor hypotheses attending salient entities.",
"Accordingly, we introduce a new term ep to the beam search score function of equation (2): s ( Y, X ) = log( P ( Y | X )) /lp ( Y ) cp ( X ; Y ) ep ( X ; Y ) ep ( X ; Y ) = TX (cid:88) i =1 max( a ei TY (cid:88) j =1 a i,j , 0 .",
"0) (6) ep ( X ; Y ) penalizes summaries that do not attend parts of the source document we believe are central.",
"Fig. 4 compares summaries produced by this model and the baseline model by showing their respective attention distribution and the impact on the decision of which words to include in the summary based on the attention level derived from salient entities.",
"We report our results in Table 4.",
"For each system, we present its APES score alongside its F1 scores for ROUGE-1, ROUGE-2 and ROUGE-L, computed using pyrouge 5 .",
"We first report APES results on full source documents and gold summaries, in order to assess the capabilities of the QA system used for APES.",
"A simple answer extractor could answer 100% of the questions given the gold-summaries.",
"But the QA system is trained over the source documents and learns to generalize and not just extract the answer.",
"Answering questions from the full documents is indeed more difficult than from the gold-summaries because the QA system must locate the answer among multiple distractors.",
"While gold-summaries present a very high APES score, the 5 https://pypi.org/project/pyrouge/ Source document: jack wilshere may rub shoulders with the likes of alexis sanchez and mesut ozil on a daily basis but he was left starstruck on thursday evening when he met brazil legend pele .",
"We then present shuffled gold-summaries, where we randomly shuffled the location of each unigram in the gold summary.",
"This score shows that even when all salient entities are in the shuffled text, APES is sensitive to the loss of coherence, readability and meaning.",
"This confirms that APES does not only match the presence of entities.",
"In contrast, ROUGE-1 fails to punish such incoherent sequences.",
"Finally, we report ROUGE and APES for the strong Lead 3 sentences of the source document a baseline known to beat most existing abstractive methods.",
"We then present APES and ROUGE scores for abstractive models, See et al. (2017)'s model, our baseline model and our APES-optimized model.",
"Our model achieves significantly higher APES scores (46.1 vs. 39.8) and improves all ROUGE metrics (by about 1 F-point over the baselines).",
"The scores on the validation set are 46.6, 41.2, 18.4, 38.1 for APES, R1, R2, RL respectively.",
"While our objective is maximizing APES score, our model also increases its corresponding ROUGE scores.",
"Unlike Paulus et al. (2017) where the authors suggested a Reinforcement Learning based model to optimize ROUGE specifically, we optimize for APES and gain better ROUGE score.",
"We finally report the results obtained by our model when gold salient entities positions are given as oracle inputs instead of the predicted a e scores.",
"The corresponding score (46.3 vs. 46.1) is only slightly above the score obtained by our model.",
"This indicates that the component of our model predicting entity saliency is good enough to drive summarization.",
"We carried out an informal error analysis to examine why some summaries perform worse than others with our architecture.",
"We compared summaries that produce perfect APES score (1,630 out of 11,490 total) to the summaries with zero APES score (1,691).",
"We measure the density of salient named entities in the source document: #(salient entity mentions)/#(distinct salient entities).",
"This density in the case of perfect APES summaries is much higher than that for low APES summaries (4.9 vs. 3.6).",
"This observation suggests that we fail to produce higher APES scores when the salient entities aren't marked through sheer repetition.",
"We introduced APES, a new automatic summarization evaluation metric for news articles datasets based on the ability of a summary to answer questions regarding salient information from the text.",
"This approach is useful in domains with source documents of about 1k words that focus on named entities such as news articles, where named entities are effectively aligned with Pyramid SCUs.",
"In other non-news domains, and longer documents, other methods for generating questions should be designed.",
"We compare APES to manual evaluation metrics on the TAC 2011 AESOP task and confirm its value as a complement to ROUGE.",
"We introduce a new abstractive model that optimizes APES scores on the CNN/Daily Mail dataset by attending salient entities from the input document, which also provides competitive ROUGE scores.",
"This research was supported by the Lynn and William Frankel Centre for Computer Science at Ben-Gurion University."
] | [
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"other"
] |
[
"Numerical reasoning over hybrid data containing both textual and tabular content (e.g., financial reports) has recently attracted much attention in the NLP community.",
"However, existing question answering (QA) benchmarks over hybrid data only include a single flat table in each document and thus lack examples of multistep numerical reasoning across multiple hierarchical tables.",
"To facilitate data analytical progress, we construct a new large-scale benchmark, MULTIHIERTT , with QA pairs over Multi Hier archical T abular and T extual data.",
"MULTIHIERTT is built from a wealth of financial reports and has the following unique characteristics:",
"1) each document contain multiple tables and longer unstructured texts;",
"2) most of tables contained are hierarchical;",
"3) the reasoning process required for each question is more complex and challenging than existing benchmarks; and",
"4) fine-grained annotations of reasoning processes and supporting facts are provided to reveal complex numerical reasoning.",
"We further introduce a novel QA model termed MT2Net, which first applies facts retrieving to extract relevant supporting facts from both tables and text and then uses a reasoning module to perform symbolic reasoning over retrieved facts.",
"We conduct comprehensive experiments on various baselines.",
"The experimental results show that MULTIHIERTT presents a strong challenge for existing baselines whose results lag far behind the performance of human experts.",
"The dataset and code are publicly available at https://github.",
"com/psunlpgroup/MultiHiertt .",
"In recent years, as key to many NLP tasks such as QA, there is a flurry of works on numerical reasoning over various types of data including textual data (Dua et al., 2019; Amini et al., 2019; Xie and Sun, 2019) and tabular data (Moosavi et al., 2021; Suadaa et al., 2021).",
"More recently, numerical Figure 1: An example of MULTIHIERTT : The system needs to first locate which segment got the most funds in 2017 in the second hierarchical table, then select relevant numbers from the first hierarchical table and generate the correct reasoning program to get the answer.",
"reasoning over hybrid data containing both textual and tabular content (Zhu et al., 2021; Chen et al., 2021) has attracted much attention.",
"For example, 6588 the FinQA dataset (Chen et al., 2021) focuses on questions that require numerical reasoning over financial report pages, e.g., \"What portion of the total identifiable net assets is in cash?\".",
"Such questions need the system to locate relevant cells in the tabular content and then perform a division operation to get the final answer.",
"However, existing QA datasets over hybrid data only contain a single flat table in each document (Zhu et al., 2021; Chen et al., 2021).",
"Therefore, they lack examples that require multi-step reasoning processes across multiple paragraphs and hierarchical tables.",
"Hierarchical tables are widely used in scientific or business documents.",
"A hierarchical table usually contains multi-level headers, which makes cell selection much more challenging because it requires multi-level and bi-dimensional indexing techniques.",
"For instance, consider the example of our proposed dataset MULTIHIERTT in Figure 1, each table contains both column headers and row headers, which are hierarchical in nature.",
"And ignoring the row / column headers or not reasoning on the entire header hierarchy may lead to the wrong result.",
"For instance, in the given example, if the system simply searched for cells with a flat row header containing \"Product\" and \"Service\" and column header containing \"2018\", it may mistakenly return the value 2,894 and 382 appearing in the beginning of the first table.",
"Additionally, in real life, when analyzing financial reports, professionals such as analysts or investors often refer to multiple hierarchical tables and multiple paragraphs to obtain conclusions.",
"For instance, finding \"the segments with most funds in 2017\" requires the system to locate and perform numerical reasoning on the second hierarchical table.",
"Then the system should use the results gained from the second table to reason on the first table.",
"However, existing QA datasets lack such examples of reasoning across multiple tables.",
"To address these shortcomings, we present MULTIHIERTT : an expert-annotated dataset that contains 10,440 QA pairs, along with annotations of reasoning processes and supporting facts.",
"To the best of our knowledge, MULTIHIERTT is the first dataset for solving complicated QA tasks over documents containing multiple hierarchical tables and paragraphs.",
"In addition, to address the challenge of MULTIHIERTT , we propose MT2Net to first retrieve supporting facts from financial reports then generate executable reasoning programs to answer the questions.",
"Our experiments show that MT2Net outperforms all other baselines and achieves 38.43% F1 score.",
"However, all models still lag far behind the performance of human experts with 87.03% in F1.",
"It demonstrates MULTIHIERTT presents a strong challenge for existing baseline models and is a valuable benchmark for future research.",
"The main contribution of this work can be summarized as follows: We propose a new large-scale dataset MULTIHIERTT .",
"It contains 10,440 examples along with fully annotated numerical reasoning processes and supporting facts.",
"A strict quality control procedure is applied to ensure the meaningfulness, diversity, and correctness of each annotated QA example.",
"Compared with existing datasets, each document in MULTIHIERTT contains multiple hierarchical tables and longer unstructured text.",
"A more complex reasoning process across multiple tables and paragraphs is required to correctly answer the question.",
"We propose a novel QA model, MT2Net.",
"The model first applies facts retrieving to extract relevant supporting facts from both hierarchical tables and text.",
"And it then uses a reasoning module to reason over retrieved facts.",
"Comprehensive experiments are conducted on various baselines.",
"The experimental results demonstrate that the current QA models still lag far behind the human expert performance, and further research is needed.",
"Question Answering Benchmark There are numerous QA datasets focusing on text, ta-ble/knowledge base (KB), and hybrid data.",
"SQuAD (Rajpurkar et al., 2016) and CNN/Daily Mail (Hermann et al., 2015) are classic datasets for textual data.",
"Table/KB QA datasets mainly focus on structured tables (Pasupat and Liang, 2015; Zhong et al., 2017; Yu et al., 2018; Nan et al., 2022) and knowledge bases (Berant et al., 2013; Yih et al., 2015; Talmor and Berant, 2018; Xie et al., 2022).",
"And some recent works focus on reasoning over more complex tables including hierarchical tables (Cheng et al., 2021b; Katsis et al., 6589 QA Dataset Textual & Tabular Data / Doc (DB) NumericalReasoning # Doc (DB) # Questions Avg.",
"2021).",
"More recently, there are also some pioneering studies working on QA over hybrid data.",
"Specifically, HybridQA (Chen et al., 2020), TAT-QA (Zhu et al., 2021), and FinQA (Chen et al., 2021) focus on both textual and tabular data, while MMQA (Talmor et al., 2021) focus on QA over text, tables, and images.",
"In addition, reasoning including numerical reasoning and multi-hop reasoning has gained attention lately.",
"For example, DROP (Dua et al., 2019) is a machine reading comprehension benchmark that requires numerical reasoning on text data.",
"HotpotQA (Yang et al., 2018) and HybridQA (Chen et al., 2020) are datasets requiring multi-hop reasoning.",
"Numerical Reasoning Numerical reasoning plays an important role in different NLP tasks (Dua et al., 2019; Zhang et al., 2021; Chen et al., 2021; Zhu et al., 2021).",
"To enhance the model's numerical reasoning ability, some work adapt standard extractive QA models with specialized modules to perform numerical reasoning (Ran et al., 2019; Hu et al., 2019).",
"Recent work also focus on probing and injecting numerical reasoning skills to pretrained language models (Geva et al., 2020; Lin et al., 2020; Zhang et al., 2020; Berg-Kirkpatrick and Spokoyny, 2020).",
"Meanwhile, various benchmarks and models are proposed to solve math word problems (Koncel-Kedziorski et al., 2016; Xie and Sun, 2019; Amini et al., 2019; Hendrycks et al., 2021; Hong et al., 2021; Cobbe et al., 2021).",
"The most recent numerical reasoning QA benchmark over hybrid data are FinQA (Chen et al., 2021) and TAT-QA (Zhu et al., 2021).",
"Financial NLP Financial NLP has attracted much attention recently.",
"There have been various application in different tasks like risk management (Han et al., 2018; Theil et al., 2018; Nourbakhsh and Bang, 2019; Mai et al., 2019; Wang et al., 2019), asset management (Filgueiras et al., 2019; Blumenthal and Graf, 2019), market sentiment analysis (Daudert et al., 2018; Tabari et al., 2018; Buechel et al., 2019), financial event extraction (Ein-Dor et al., 2019; Zhai and Zhang, 2019) and financial question answering (Lai et al., 2018; Maia et al., 2018).",
"More recently, pre-trained language models are presented for finance text mining (Araci, 2019; Yang et al., 2020).",
"The most relevant work to us is FinQA (Chen et al., 2021) and TAT-QA (Zhu et al., 2021), which both construct a QA dataset acquiring numerical reasoning skills on financial reports with tabular data.",
"MULTIHIERTT are deployed based on the FinTabNet dataset (Zheng et al., 2021), which contains 89,646 pages with table annotations extracted from the annual reports of S&P 500 companies.",
"For each table contained, the FinTabNet dataset provides a detailed HTML format annotation, in which table hierarchies and cell information such as text and 6590 What is the total amount of options granted and accepted in 2007 for exercise price?",
"The raw data is filtered as follows: First, we extract documents with 1 to 4 pages and 2 to 6 tables from FinTabNet.",
"Second, we filter out the documents with limited textual contents.",
"Third, as we aim for the numerical reasoning ability, we also exclude documents with tables containing little numerical information.",
"Then, we use a pre-processing script to extract the hierarchical structure of each HTML-format table.",
"And we ignore those tables that cannot be handled by the pre-processing script.",
"As a result, a total of 4,791 documents were selected for further annotation.",
"For each document selected in 3.1, the annotators are required to compose one or two QA examples along with detailed annotation.",
"The process of annotating each QA example is as follows:",
"1) The annotators are first asked to compose a complex question that requires numerical reasoning and is meaningful for helping novices understand the annual reports.",
"The annotators are encouraged to compose questions that require the information from both the textual and tabular content or from multiple tables.",
"2) For those questions requiring numerical expression, the annotators are then asked to write down the reasoning programs to answer the question.",
"In detail, the annotators are asked to elaborate on the operation steps to answer the question.",
"The definitions of all operations are shown in Table 7 in Appendix.",
"3) They are also required to mark all the supporting facts from tabular and textual content for each question.",
"Strict quality control procedures are designed to ensure the quality of dataset annotation, especially the diversity and meaningfulness of proposed questions.",
"The human evaluation scores and inter-evaluator agreements are reported in Table 2.",
"Expert Annotators To help improve the annotation process, we first enroll five experts with professional experience in finance.",
"During annotation, they are asked to provide feedback regarding the task instructions and the user experience of the annotation interface, based on which we iteratively modify the annotation guideline and interface design.",
"In the stage of crowd-sourced annotation, we hire 23 graduate students (14 females and 9 males) majoring in finance or similar discipline.",
"Before starting the official annotation process, each annotator is given a two-hour training session to learn 6591 the requirements and the annotation interface.",
"Annotation De-Biasing As suggested in previous research (Kaushik and Lipton, 2018; Clark et al., 2019; Jiang and Bansal, 2019; Yang et al., 2022), consider annotation bias of QA benchmarks is of great significance.",
"During the pilot annotation period, we found that when generating question-answering pairs, annotators may prefer simpler ones.",
"To solve this issue, we use thresholds to restrict the proportions of questions with different numbers of numerical reasoning steps.",
"Meanwhile, the proportions of questions with span selection answer types are set to 20%.",
"To further increase the diversity of question-answer pair annotation, we also select and include 2,119 QA examples from FinQA (Chen et al., 2021).",
"Multi-Round Validation To further ensure the diversity and correctness of proposed question-reasoning pairs, each document is assigned to three annotators and one verifier in order.",
"For annotators, each is required to first validate the previous annotator's annotation and fix the mistakes if there are.",
"Then, they are asked to create one or two more question-reasoning pairs that are different from the existing ones.",
"After each annotator finishes tasks, we assign another verifier with good performance on this project to validate all the annotations.",
"Core statistics of MULTIHIERTT are reported in Table 3.",
"Table 1 shows a comprehensive comparison of related datasets.",
"MULTIHIERTT is the first dataset to study numerical reasoning questions over hybrid data containing multiple hierarchical tables.",
"Compared with TAT-QA and FinQA, documents in MULTIHIERTT contain longer unstructured input text and multiple tables, making the evidence retrieving and reasoning more challenging.",
"And MULTIHIERTT has diverse and complex questions, as illustrated in Figure 2.",
"We also analyze supporting facts coverage for each question.",
"In MULTIHIERTT , 1) 10.24% of the questions only require the information in the paragraphs to answer;",
"2) 33.09% of the questions only require the information in one table to answer;",
"3) 7.93% require the information in more than one table but without paragraphs to answer;",
"4) 48.74% require both the text and table information to answer, and among them, 23.20% required the information in more than one table.",
"The average number of annotated supporting facts are 7.02.",
"Meanwhile, among those questions with annotated numerical reasoning programs, 28.94% of them have 1 step; 37.76% of them have 2 steps; 15.21% of them have 3 steps; and 18.10% of them have more than 3 steps.",
"As a result, the average number of numerical reasoning steps is 2.47.",
"To address the challenge of MULTIHIERTT , we propose a framework named MT2Net.",
"Figure 3 gives an overview of our proposed model.",
"MT2Net first applies fact retrieving module to extract relevant supporting facts from the hierarchical tables and paragraphs.",
"Then, a reasoning module is adapted to perform reasoning over retrieved facts and get the final answer.",
"Fact Retrieving Module The whole input text in each document of MULTIHIERTT can exceed 3,000 tokens and contain many numbers, which is beyond the capability of the current popular QA models (Devlin et al., 2019; Liu et al., 2019).",
"Therefore, we employ a fact retrieving module to first retrieve the supporting facts from the documents.",
"Previous works on hybrid datasets (Zhu et al., 2021; Chen et al., 2021; Li et al., 2021) use templates to flatten each row of the table into sentences.",
"And our facts retrieving module applies similar ideas.",
"However, different from other hybrid datasets, most tables in MULTIHIERTT are hierarchical.",
"Therefore, we turn each cell into a sentence, along with its hierarchical row and column headers.",
"For example, the first data cell in the first table in Figure 1 is translated as \"For Innovation Systems of Segment, sales of product in 2018, Year Ended December 31 is 2,894\".",
"bi-classifier (Devlin et al., 2019).",
"During the inference stage, the topn sentences are retrieved as supporting facts.",
"They are reordered according to the order of appearance in the original document.",
"Then they will serve as input to reasoning module.",
"Reasoning Module We first use pre-trained LMs to encode the retrieved sentences from the facts retrieving module.",
"Then, we divide the answers into two types: arithmetic program and span.",
"For each answer type, we use a unique sub-module to calculate the conditional answer probability P ( answer | type ) : Program sub-module : The structure is similar with the program generator of FinQANet (Chen et al., 2021).",
"The sub-module aims to generate the executable program to answer the question.",
"Specifically, an LSTM is used for decoding.",
"At each decoding step T , the LSTM can generate one token from",
"1) the numbers from the retrieved,",
"2) pre-defined operators, and",
"3) the tokens already generated in the previous steps.",
"After the completion of generation, the sub-module will execute the generated programs and get the predicted answer.",
"Span sub-module : The span sub-module aims to select the predicted span candidate, which is a span of retrieved sentences.",
"The answer probability is defined as the product of the probabilities of the start and end positions in the retrieved evidence.",
"Meanwhile, an extra output layer is used to predict the probability P ( type ) of each answer type.",
"In particular, we take the output vector [CLS] from LMs as input to compute the probability.",
"In the training stage, the final answer probability is defined as the joint probability over all feasible answer types, i.e., (cid:80) type P ( type ) P ( answer | type ) .",
"Here, both P ( type ) and P ( answer | type ) is learned by the model.",
"In the inference stage, the model first selects the most probable answer type and then uses corresponding sub-modules to predict the answer.",
"TAGOPTAGOP 1 is the baseline model for TAT-QA dataset (Zhu et al., 2021).",
"It first uses sequence tagging with the InsideOutside tagging (IO) approach to extract supporting facts.",
"Then an operator classifier is applied to decide which operator is used to infer the final answer via extracted facts.",
"Different from ours, TAGOP can only perform symbolic reasoning with a single type of pre-defined aggregation operators (e.g. change Ratio, division), and might fail to answer complex questions requiring multi-step reasoning.",
"FinQANet FinQANet 2 is the baseline model for FinQA dataset (Chen et al., 2021).",
"It first uses a BERT-based retriever to take the topn supporting facts.",
"Then a program generator is applied to generate the reasoning programs to get the final answers.",
"Different from ours, FinQANet ignores the hierarchical structure of tables when linearizing each row of a table.",
"And it is not designed to answer span selection questions.",
"Longformer + Reasoning module To demonstrate the necessity of breaking up models into facts retrieving and reasoning modules, we directly use the pre-trained Longformer-base 3 (Beltagy et al., 2020) as the input encoder in the reasoning module, and encode the whole document.",
"Fact Retrieving Module + TAPAS We employ TAPAS (MASKLM-base) 4 (Herzig et al., 2020; Eisenschlos et al., 2020) as a baseline over tabular data.",
"TaPas is pretrained over large-scale tables and associated text from Wikipedia jointly.",
"To finetune it, we use the table with most supporting facts along with the answer as input for each example.",
"For the inference stage, the table with most portion of top-15 retrieved facts is used as input.",
"Fact Retrieving + NumNet NumNet+ 5 (Ran et al., 2019) has demonstrated its effectiveness on the DROP dataset (Dua et al., 2019).",
"It designs a NumGNN between the encoding and prediction module to perform numerical comparison and numerical reasoning.",
"However, NumNet+ only supports addition and subtraction when performing symbolic reasoning, thus cannot handle those complex questions requiring operators such as division.",
"Fact Retrieving Module + Seq2Prog A Seq2Prog architecture adopted from baseline of MathQA dataset (Amini et al., 2019) is used as the reasoning module.",
"Specifically, we use a biLSTM encoder and an LSTM decoder with attention.",
"For the fact retrieving module, we use BERT-base as the classifier.",
"Since most of the examples in our dataset have less than 7 supporting facts (89.3%), and we find that longer inputs might lower the performance of the reasoning module, we take the top10 retrieving facts as the retriever results.",
"For the reasoning module, we experiment on using BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) as the encoder.",
"We use the Adam optimizer (Kingma and Ba, 2014) for all models.",
"The 3 https://github.com/allenai/longformer 4 https://github.com/google-research/ tapas 5 https://github.com/llamazing/numnet_ plus Dev Test EM F 1 EM F 1 Longformer + Reasoning 2.71 6.93 2.86 6.23 Facts Retrieving + TAPAS 8.94 10.70 7.67 10.04 Facts Retrieving + NumNet 10.32 12.59 10.77 12.02 TAGOP (RoBERTa-large) 19.16 21.08 17.81 19.35 Facts Retrieving + Seq2Prog 26.19 28.74 24.58 26.30 FinQANet (RoBERTa-large) 32.41 35.37 31.72 33.60 MT2Net (BERT-base) 33.68 35.94 32.07 33.67 MT2Net (BERT-large) 34.03 36.13 33.25 34.98 MT2Net (RoBERTa-base) 35.69 37.81 34.32 36.17 MT2Net (RoBERTa-large) 37.05 39.96 36.22 38.43 Human Expert Performance 83.12 87.03 Table 4: Performance of MT2Net compared with different baseline models on the dev and test sets of MULTIHIERTT .",
"training of all models is conducted on RTX 3090s.",
"All the implementation of LMs is based on the hug-gingface transformers library.",
"To ensure fairness, we set batch size as 32 for all baseline models.",
"For Evaluation Metrics, following TAT-QA (Zhu et al., 2021), we report exact matching (EM) and adopted numeracy-focused F 1 (Dua et al., 2019).",
"To test the performance of the human expert on MULTIHIERTT , we invite another two professionals.",
"We randomly sampled 60 examples from the test set, and ask them to answer the questions individually within three hours.",
"The results are reported in the last row of Table 4.",
"Table 4 summarizes our evaluation results of different models.",
"We use the same fact retrieving results for all \"Retrieving + Reasoning\" models.",
"For the fact retrieving module, we have 76.4% recall for the top10 retrieved facts and 80.8% recall for the top15 retrieved facts.",
"Necessity of applying retrieving-reasoning pipeline Directly using an end-to-end pretrained Longformer model to replace a retrieving module falls far behind.",
"This makes sense because longer input contains much irrelevant numerical information, which makes the reasoning module difficult to learn.",
"worse than MT2Net because they ignore the table's hierarchical structure in the retrieving part.",
"Different from ours, which flatten each cell with its header hierarchical structures, both TAGOP and FinQANet flatten each table by rows, losing the table's hierarchical structure information.",
"Necessity of an effective reasoning module Most questions in MULTIHIERTT require models to perform multi-step reasoning and integrate different kinds of operators.",
"Generally, the reasoning module generating reasoning programs to get answers performs better than directly generating answers by end-to-end method, i.e. adopted TAPAS.",
"Both adopted NumNet and TAGOP perform much worse than MT2Net because they only support limited symbolic reasoning.",
"Specifically, TAGOP can only perform with a single type of pre-defined aggregation operator for each question, and NumNet only supports addition and subtraction operators when performing symbolic reasoning.",
"By contrast, MT2Net performs better than FinQANet and Seq2Prog because it applies different sub-modules to answer questions with different answer types.",
"The results also show that larger pre-trained models have better performance.",
"This is because they are pre-trained on more financial corpus.",
"However, all the models perform significantly worse than human experts, indicating MULTIHIERTT is challenging to state-of-the-art QA models and there is a large room for improvements for future research.",
"To guide the future directions of model improvement, various performance breakdown experiments on the test set are conducted using the MT2Net (RoBERTa-large) model.",
"Table 5 shows the results.",
"Generally, the model has a much lower accuracy on questions with more than two numerical reasoning steps.",
"Meanwhile, the model performs poorly on questions requiring cross-table supporting facts.",
"We further investigate the proposed MT2Net by analyzing error cases.",
"We randomly sample 100 error cases from the results of the MT2Net (RoBERTa-large) model on the test set, and classify them into four main categories as shown in Table 6, along with examples.",
"The analysis shows that around 64% error (Wrong Operand/Span+Missing Operand) is caused by the failure to integrate the supporting facts correctly.",
"Meanwhile, the current model fails to integrate external finance knowledge to answer questions.",
"Although the proposed MT2Net model outperforms other baseline models, it still performs significantly worse than human experts, which reflects the challenge of MULTIHIERTT .",
"Primarily, we find that models do not perform well on certain types of questions:",
"1) questions requiring reasoning across multiple tables;",
"2) questions requiring multi-step reasoning;",
"3) questions requiring reasoning over tables with complex hierarchical structures; and",
"4) questions requiring external financial knowledge.",
"four main directions of work may be workable:",
"1) designing a specialized module to handle multi-table reasoning;",
"2) decomposing a complex question requiring multi-step reasoning into several simpler sub-questions that QA models can handle (Perez et al., 2020; Chen et al., 2020);",
"3) applying a more advanced table-encoding method.",
"For example, a pre-trained model with specialized table structure-aware mechanisms (Wang et al., 2021; Cheng et al., 2021a; Yang et al., 2022) can be utilized in the facts retrieving module to better understand hierarchical tables; and",
"4) leveraging structured knowledge (Xie et al., 2022) to inject external financial knowledge to models.",
"We have proposed MULTIHIERTT , a new large-scale QA dataset that aims to solve complicated QA tasks that require numerical reasoning over documents containing multiple hierarchical tables and paragraphs.",
"To address the challenge of MULTIHIERTT , we introduce a baseline framework named MT2Net.",
"The framework first retrieves supporting facts from financial reports and then generates executable reasoning programs to answer the question.",
"The results of comprehensive experiments showed that current QA models (best F 1 : 38.43 % ) still lag far behind the human expert performance (F 1 : 87.03 % ).",
"This motivates further research on developing QA models for such complex hybrid data with multiple hierarchical tables.",
"Data in MULTIHIERTT is collected from the FinQA dataset (Chen et al., 2021) and FinTabNet dataset (Zheng et al., 2021).",
"FinQA is publicly available under the MIT license 6 .",
"FinTabNet is publicly available under the license CDLA-Permissive-1.0 7 .",
"Both licenses permits us to compose, modify, publish, and distribute additional annotations upon the original dataset.",
"For the internal annotation of MULTIHIERTT , each expert is paid $20 per hour.",
"For the external annotation, we hire 23 graduate students majoring in finance or similar disciplines.",
"We regard creating one question-reasoning pair, or validating one document's annotation as a unit task.",
"And we pay around $1.1 for each unit task.",
"Averagely, an annotator can finish 7 unit tasks per hour after training 6 https://opensource.org/licenses/MIT 7 https://cdla.dev/permissive-1-0/ and practicing.",
"And the hourly rates are in the range of $6 and $9 based on the different working speed (above the local average wage of similar jobs).",
"In total, the approximate working hours to annotate MULTIHIERTT dataset is 1500 hours.",
"The whole annotation work lasts about 70 days.",
"We appreciate all the annotators' efforts to construct MULTIHIERTT .",
"And we would like to thank the anonymous reviewers and action editors for their constructive discussions and feedback."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Abstract",
"Non-autoregressive (NAR) neural machine translation is usually done via knowledge distillation from an autoregressive (AR) model.",
"Under this framework, we leverage large monolingual corpora to improve the NAR model's performance, with the goal of transferring the AR model's generalization ability while preventing overfitting.",
"On top of a strong NAR baseline, our experimental results on the WMT14 En-De and WMT16 En-Ro news translation tasks confirm that monolingual data augmentation consistently improves the performance of the NAR model to approach the teacher AR model's performance, yields comparable or better results than the best non-iterative NAR methods in the literature and helps reduce overfitting in the training process.",
"Neural machine translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2014) has achieved impressive performance in recent years, but the autoregressive decoding process limits the translation speed and restricts low-latency applications.",
"To mitigate this issue, many non-autoregressive (NAR) translation methods have been proposed, including latent space models (Gu et al., 2017; Ma et al., 2019; Shu et al., 2019), iterative refinement methods (Lee et al., 2018; Ghazvininejad et al., 2019), and alternative loss functions (Libovick`y and Helcl, 2018; Wang et al., 2019; Wei et al., 2019; Li et al., 2019; Shao et al., 2019).",
"The decoding speedup for NAR models is typically 2-15 depending on the specific setup (e.g., the number of length candidates, number of latent samples, etc.), and NAR models can be tuned to achieve different trade-offs between time complexity and decoding quality (Gu et al., 2017; Wei et al., 2019; Ghazvininejad et al., 2019; Ma et al., 2019).",
"Although different in various aspects, all of these methods are based on transformer modules (Vaswani et al., 2017), and depend on a well-trained AR model to obtain its output translations to create targets for NAR model training.",
"This training setup is well-suited to leverage external monolingual data, since the target side of the NAR training corpus is always generated by an AR model.",
"Techniques like backtranslation (Sennrich et al., 2015a) are known to improve MT performance using monolingual data alone.",
"However, to the best of our knowledge, monolingual data augmentation for NAR-MT has not been reported in the literature.",
"In typical NAR-MT model training, an AR teacher provides a consistent supervision signal for the NAR model; the source text that was used to train the teacher is decoded by the teacher to create synthetic target text.",
"In this work, we use a large amount of source text from monolingual corpora to generate additional teacher outputs for NAR-MT training.",
"We use a transformer model with minor structural changes to perform NAR generation in a non-iterative way, which establishes stronger baselines than most of the previous methods.",
"We demonstrate that generating additional training data with monolingual corpora consistently improves the translation quality of our baseline NAR system on the WMT14 En-De and WMT16 En-Ro translation tasks.",
"Furthermore, our experiments show that NAR models trained with increasing amount of extra monolingual data are less prone to overfitting and generalize better on longer sentences.",
"In addition, we have obtained Ro En and En De results which are state-of-the-art for non-iterative NAR-MT, just by using more monolingual data.",
"Most of the previous methods treat the NAR modeling objective as a product of independent token probabilities (Gu et al., 2017), but we adopt a different point of view by simply treating the NAR model as a function approximator of an existing AR model.",
"Given an AR model and a source sentence, the translation process of the greedy output 1 of the AR model is a complex but deterministic function.",
"Since the neural networks can be near-perfect nonlinear function approximators (Liang and Srikant, 2016), we can expect an NAR model to learn the AR translation process quite well, as long as the model has enough capacity.",
"In particular, we first obtain the greedy output of a trained AR model, and use the resulting paired data to train the NAR model.",
"Other papers on NAR-MT (Gu et al., 2017; Lee et al., 2018; Ghazvininejad et al., 2019) have used AR teacher models to generate training data, and this is a form of sequence-level knowledge distillation (Kim and Rush, 2016).",
"Throughout this paper, we focus on non-iterative NAR methods.",
"We use standard transformer structures with a few small changes for NAR-MT, which we describe below.",
"For the target side input, most of the previous work simply copied the source side as the decoder's input.",
"We propose a soft copying method by using a Gaussian kernel to smooth the encoded source sentence embeddings x enc .",
"Suppose the source and target lengths are T and T (cid:48) respectively.",
"Then the t th input token for the decoder is (cid:80) Ti =1 x enci K ( i, t ) , where K ( i, t ) is the Gaussian distribution evaluated at i with mean TT (cid:48) t and variance 2 .",
"( 2 is a learned parameter.)",
"We modify the attention mask so that it does not mask out the future tokens, and every token is 1 By greedy', we mean decoding with a beam width of",
"dependent on both its preceding and succeeding tokens in every layer.",
"Gu et al. (2017), Lee et al. (2018), Li et al. (2019) and Wang et al. (2019) use an additional positional self-attention module in each of the decoder layers, but we do not apply such a layer.",
"It did not provide a clear performance improvement in our experiments, and we wanted to reduce the number of deviations from the base transformer structure.",
"Instead, we add positional embeddings at each decoder layer.",
"We use a simple method to select the target length for NAR generation at test time (Wang et al., 2019; Li et al., 2019), where we set the target length to be T (cid:48) = T + C , where C is a constant term estimated from the parallel data and T is the length of the source sentence.",
"We then create a list of candidate target lengths ranging from [ T (cid:48) B, T (cid:48) + B ] where B is the half-width of the interval.",
"For example, if T = 5 , C = 1 and we used a half-width of B = 2 , then we would generate NAR translations of length [4 , 5 , 6 , 7 , 8] , for a total of 5 candidates.",
"These translation candidates would then be ranked by the AR teacher to select the one with the highest probability.",
"This is referred to as length-parallel decoding in Wei et al. (2019).",
"Augmenting the NAR training corpus with monolingual data provides some potential benefits.",
"Firstly, we allow more data to be translated by the AR teacher, so the NAR model can see more of the AR translation outputs than in the original training data, which helps the NAR model generalize better.",
"Secondly, there is much more monolingual data than parallel data, especially for low-resource languages.",
"Incorporating monolingual data for NAR-MT is straightforward in our setup.",
"Given an AR model that we want to approximate, we obtain the source-side monolingual text and use the AR model to generate the targets that we can train our NAR model on.",
"Data We evaluate NAR-MT training on both the WMT16 En-Ro (around 610k sentence pairs) and the WMT14 En-De (around 4.5M sentence pairs) parallel corpora along with the associated WMT",
"monolingual corpora for each language.",
"For the parallel data, we use the processed data from Lee et al. (2018) to be consistent with previous publications.",
"The WMT16 En-Ro task uses newsdev-2016 and newstest-2016 as development and test sets, and the WMT14 En-De task uses newstest-2013 and newstest-2014 as development and test sets.",
"We report all results on test sets.",
"We used the Romanian portion of the News Crawl 2015 corpus and the English portion of the Europarl v7/v8 corpus 2 as monolingual text for our En-Ro experiments, which are both about 4 times larger than the original paired data.",
"We used the News Crawl 2007/2008 corpora for German and English monolingual text 2 in our En-De experiments, and downsampled them to 3 million sentences per language.",
"The data statistics are summarized in Table",
"1. The monolingual data are processed following Lee et al. (2018), which are tokenized and segmented into subword units (Sennrich et al., 2015b).",
"The vocabulary is shared between source and target languages and has 40 k units.",
"We use BLEU to evaluate the translation quality 3 .",
"2 http://www.statmt.org/wmt16/translation-task.html 3 We report tokenized BLEU scores in line with prior work (Lee et al., 2018; Ma et al., 2019), which are case-insensitive for WMT16 En-Ro and case-sensitive for WMT14 En-De in the data provided by Lee et al. (2018).",
"Model Configuration We use the settings for the base transformer configuration in Vaswani et al. (2017) for all the models: 6 layers per stack, 8 attention heads per layer, 512 model dimensions and 2048 hidden dimensions.",
"The AR and NAR model have the same encoder-decoder structure, except for the decoder attention mask and the decoding input for the NAR model as described in Sec. 2.2.",
"Training and Inference We initialize the NAR embedding layer and encoder parameters with the AR model's.",
"The NAR model is trained with the AR model's greedy outputs as targets.",
"We use the Adam optimizer, with batches of size 64k tokens for one gradient update, and the learning rate schedule is the same as the one in Vaswani et al. (2017), where we use 4,000 warm-up steps and the maximum learning rate is around 0.0014.",
"We stop training when there is no further improvement in the last 5 epochs, and training finishes in 30 epochs for AR models and 50 epochs for NAR models, except for the En-De experiments with monolingual data where we train for 35 epochs to roughly match the number of parameter updating steps without using extra monolingual data ( 140 k steps).",
"We average the last 5 checkpoints to obtain the final model.",
"We train the NAR model with cross-entropy loss and label smoothing ( (cid:15) = 0 .",
"1 ).",
"During infer-0.0 0.2 0.4 0.6 0.8 1.0 percentage of monolingual data used 1.8 2.0 2.2 2.4 2.6 2.8 a v e r a g e l o ss o f c o n v e r g e d m o d e l Ro-En train Ro-En test En-Ro train En-Ro test Figure 1: Average loss of the NAR models versus the percentage of monolingual data used during training.",
"ence time, we use length parallel decoding with C = 0 , and evaluate the BLEU scores on the reference sentences.",
"All the models are implemented with MXNet and GluonNLP (Guo et al., 2019).",
"We used 4 NVIDIA V100 GPUs for training, which takes about a day for an AR model and up to a week for an NAR model depending on the data size, and testing is performed on a single GPU.",
"Main Results We present our BLEU scores alongside the scores of other non-iterative methods in Table",
"2. Our baseline results surpass many of the previous results, which we attribute to the way that we initialize the decoding process.",
"Instead of directly copying the source embeddings to the decoder input, we use an interpolated version of the encoder outputs as the decoder input, which allows the encoder to transform the source embeddings into a more usable form.",
"Note that a similar technique is adopted in Wei et al. (2019), but our model structure and optimization are much simpler as we do not have any imitation module for detailed teacher guidance.",
"Our results confirm that the use of monolingual data improves the NAR model's performance.",
"By incorporating all of the monolingual data for the En-Ro NAR-MT task, we see a gain of 0.70 BLEU points for the En Ro direction and 1.40 for the Ro En direction.",
"Similarly, we also see significant gains in the En-De NAR-MT task, with an En Ro Ro En no half all no half all B mono mono mono mono mono mono 0 27.19 +0.65 +0.56 26.62 +1.52 +1.58 1 29.34 +0.63 +0.69 28.81 +1.26 +1.46 2 30.46 +0.34 +0.45 30.18 +1.08 +1.24 3 30.87 +0.37 +0.71 31.24 +0.88 +1.09 4 31.06 +0.45 +0.67 31.92 +0.90 +1.25 5 31.21 +0.53 +0.70 32.06 +1.10 +1.40 6 31.20 +0.39 +0.62 31.98 +1.17 +1.43 7 30.99 +0.43 +0.51 31.85 +1.19 +1.31 gold 29.64 +0.61 +0.85 29.83 +1.42 +1.69 Table 3: BLEU scores on the WMT16 En-Ro test sets for NAR models trained with different numbers of length candidates and amounts of additional monolingual data.",
"By removing the duplicated output tokens as a simple postprocessing step (following Lee et al. (2018)), we achieved 33.57 BLEU for the WMT16 Ro En direction and 25.73 BLEU for the WMT14 En De direction, which are state-of-the-art among non-iterative NAR-MT results.",
"In addition, our work shrinks the gap between the AR teacher and the NAR model to just 0.11 BLEU points in the Ro En direction.",
"Losses in Training and Evaluation To further investigate how much the monolingual data contributes to BLEU improvements, we train En-Ro NAR models with 0%, 25%, 50%, and 100% of the monolingual corpora and plot the cross-entropy loss on the training data and the testing data for the converged model.",
"In Figure 1, when no monolingual data is used, the training loss typically converges to a lower point compared to the loss on the testing set, which is not the case for the AR model where the validation and testing losses are usually lower than the training loss.",
"This indicates that the NAR model overfits to the training data, which hinders its generalization ability.",
"However, as more monolingual data is added to the training recipe, the overfitting problem is reduced and the gap between the evaluation and training losses shrinks.",
"Effect of Length-Parallel Decoding To test how the NAR model performance and the monolingual gains are affected by the number of decoding length candidates, we vary the half-width B (Sec. 2.3) across a range of values and test the NAR models trained with 0%, 50%, and 100% of the monolingual data for the En-Ro task (Table 3).",
"The table shows that having multiple length candidates can increase the BLEU score significantly and can be better than using the gold target length, but having too many length candidates can hurt the performance and slow down decoding (in our case, the optimal B is 5).",
"Nonetheless, for every value of B , the BLEU score consistently increases when monolingual data is used, and more data brings greater gains.",
"BLEU under Different Sentence Lengths In Table 4, we present the BLEU scores on WMT16 Ro En test sentences grouped by source sentence lengths.",
"We can see that the baseline NAR model's performance drops quickly as sentence length increases, whereas the NAR model trained with monolingual data degrades less over longer sentences, which demonstrates that external monolingual data improves the NAR model's generalization ability.",
"We found that monolingual data augmentation reduces overfitting and improves the translation quality of NAR-MT models.",
"We note that the monolingual corpora are derived from domains which may be different from those of the parallel training data or evaluation sets, and a mismatch can affect NAR translation performance.",
"Other work in NMT has examined this issue in the context of backtranslation (e.g., Edunov et al. (2018)), and we expect the conclusions to be similar in the NAR-MT case.",
"There are several open questions to investigate: Are the benefits of monolingual data orthogonal to other techniques like iterative refinement?",
"Can the NAR model perfectly recover the AR model's performance with much larger monolingual datasets?",
"Are the observed improvements language-dependent?",
"We will consider these research directions in future work."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"result",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"Knowing the Most Frequent Sense (MFS) of a word has been proved to help Word Sense Disambiguation (WSD) models significantly.",
"However, the scarcity of sense-annotated data makes it difficult to induce a reliable and high-coverage distribution of the meanings in a language vocabulary.",
"To address this issue, in this paper we present CluBERT, an automatic and multilingual approach for inducing the distributions of word senses from a corpus of raw sentences.",
"Our experiments show that CluBERT learns distributions over English senses that are of higher quality than those extracted by alternative approaches.",
"When used to induce the MFS of a lemma, CluBERT attains state-of-the-art results on the English Word Sense Disambiguation tasks and helps to improve the disambiguation performance of two off-the-shelf WSD models.",
"Moreover, our distributions also prove to be effective in other languages, beating all their alternatives for computing the MFS on the multilingual WSD tasks.",
"We release our sense distributions in five different languages at https://github.",
"com/SapienzaNLP/clubert .",
"Word Sense Disambiguation (WSD) is the task of associating a word in context with a meaning from a given inventory of senses (Navigli, 2009).",
"It resides at the core of Natural Language Processing and has been proved to be beneficial to different downstream tasks, e.g., Information Extraction (Delli Bovi et al., 2015) and Machine Translation (Pu et al., 2018).",
"Current approaches to WSD can mainly be divided into supervised and knowledge-based methods.",
"While the former leverage manually-annotated data to train statistical models, the latter exploit the knowledge enclosed within a semantic network to identify the most appropriate meaning of a word in context.",
"Both kinds of approach, however, suffer from the knowledge acquisition bottleneck problem (Gale et al., 1992; Pasini, 2020).",
"In fact, since words and senses follow a Zipfian distribution (McCarthy et al., 2004a), information on rare words and meanings is scarce in both semantically-annotated data and knowledge bases.",
"This undermines the ability of supervised and knowledge-based approaches to deal with words unseen at training time, or that have only a few connections within a semantic network.",
"To overcome this limitation, the Most Frequent Sense (MFS) backoff strategy, i.e., tagging a word with its meaning that has been manually annotated as the most frequent one, is employed by both approaches.",
"Nevertheless, while the MFS proved to be a strong baseline in the general-domain setting of WSD, it does not scale over specific domains (Pasini and Navigli, 2020) and its applicability is limited to languages where annotated data are available, i.e., English.",
"Furthermore, the way words and meanings are used changes over time, hence making old annotations unreliable.",
"This is the case with WordNet (Miller et al., 1990), i.e., the most used electronic English dictionary in WSD.",
"WordNet provides information about sense frequency that is either manually-annotated or derived from SemCor (Miller et al., 1993), i.e., a corpus where words are manually tagged with WordNet meanings.",
"However, neither WordNet nor SemCor have been updated in the past 10 years, thus making their information about sense frequency outdated.",
"For example, the WordNet most frequent sense for the noun pipe is its smoking device meaning, although, nowadays, one would expect the metal pipe sense to appear more often in general.",
"To overcome some of the aforementioned limitations, different approaches to automatically extracting the distribution of senses have been proposed (Pasini and Navigli, 2018; Hauer et al., 2019).",
"However, these fail to match the WordNet MFS performance and are either dependent on bilingual corpora (Hauer et al., 2019), or limited to nouns only (Pasini and Navigli, 2018).",
"In this paper, we present CluBERT, a multilingual cluster-based approach that automatically induces the distribution of word senses from a corpus of raw sentences without relying on manually-annotated data.",
"By exploiting the assumption that similar meanings appear in similar contexts (Reif et al., 2019) and the representational power of BERT (Devlin et al., 2019), CluBERT can learn distributions that are of better quality according to both intrinsic and extrinsic evaluation than those extracted either by its competitors, or from manually-curated resources.",
"Furthermore, our approach outperforms its alternatives in all multilingual and most domain-specific WSD test sets.",
"Finally, when used as backoff strategy of a WSD architecture, our automatically-induced distributions are shown to lead the underlying model to higher results than when using the standard manually-curated distributions of WordNet, hence placing themselves as a better and more flexible alternative.",
"Word Sense Disambiguation (WSD) is a longstanding problem in Natural Language Processing which was first formulated to address the ambiguity of words in the context of Machine Translation (Weaver, 1949).",
"Nowadays, WSD models can be mainly divided in two groups: knowledge-based and supervised.",
"Knowledge-based methods (Agirre et al., 2014; Moro et al., 2014; Tripodi and Pelillo, 2015) rely on the information enclosed within a semantic network such as WordNet (Miller et al., 1990), a manually-curated resource organised in a graph structure where nodes are concepts and edges are semantic relations between them, or BabelNet (Navigli and Ponzetto, 2010, 2012), a large multilingual knowledge base where synsets are lexicalised in more than 250 languages.",
"Since knowledge-based approaches do not rely on semantically-annotated corpora, they can easily scale over different languages as long as their underlying semantic network supports them (Scarlini et al., 2020; Maru et al., 2019; Scozzafava et al., 2020).",
"Nevertheless, these approaches struggle to remain competitive on English when compared to supervised methods.",
"Supervised approaches, instead, take advantage of sense-annotated data and frame the WSD task as a classification problem, where each word has its own set of labels, i.e., its possible meanings according to a given sense inventory.",
"Ranging from word-based approaches, where a single SVM classi-fier is specialised in disambiguating only one word in a sentence (Zhong and Ng, 2010; Iacobacci et al., 2016; Yuan et al., 2016), to more general neural architectures that classify all the words together (Ra-ganato et al., 2017a; Vial et al., 2019; Hadiwinoto et al., 2019; Bevilacqua and Navigli, 2020), supervised methods have proved to outperform their knowledge-based counterparts whenever annotated data are available (Scarlini et al., 2019).",
"Despite the progress and the increment in the overall performance, both kinds of approach still rely, most of the time, on the Most Frequent Sense heuristic whenever a word does not appear tagged in the training set, or the confidence score of its disambiguation is lower than a threshold.",
"The MFS baseline, in fact, has proved to be very competitive (McCarthy et al., 2004a), yet, it is limited to words and senses comprised in a manually-annotated corpus such as SemCor (Miller et al., 1993).",
"To cope with this limitation, several works have been proposed over the years to automatically learn the Most Frequent Sense of a word.",
"A seminal work in this direction was that of McCarthy et al. (2004b), where a thesaurus and the distributional similarity between words were used to find the predominant meaning of a given lemma.",
"More recent works, instead, have focused on inducing the full distribution over the senses of a given word.",
"Bennett et al. (2016) exploited topic modelling techniques, whereas Pasini and Navigli (2018) presented two multilingual approaches that provided full distributions over nominal senses, not only for English, but also for words in other languages.",
"The work we propose in this paper stands out from previous approaches, exploiting for the first time, to the best of our knowledge, BERT contextualized embeddings together with a knowledge-based WSD model to compute the distribution of word meanings.",
"Our approach is not tied to any specific language and can potentially be applied to all languages supported by both BERT ( 104 ) and BabelNet (more than 280 ).",
"In this Section, we present CluBERT, a multilingual approach for computing the distribution of",
"word senses from a corpus of raw sentences.",
"Our approach takes as input a corpus C and a target lexeme l 1 and exploits BERT 2 , i.e., a pretrained language model, and BabelNet, i.e., a multilingual knowledge base.",
"We also define the set of possible meanings M l for the lexeme l as the set of all the synsets 3 , i.e., sets of synonyms, in BabelNet which have l among their lexicalizations.",
"CluBERT extracts the sense distribution for l by applying the following three steps:",
"1. Sentence Clustering , which clusters together the sentences of C in which l appears based on the similarity of their contexts 4 .",
"2. Cluster Disambiguation , which assigns to each cluster a distribution over the possible meanings of l in BabelNet by exploiting the context provided by the cluster itself.",
"3. Distribution Extraction , which, given the distributions computed in the previous step, finally derives the general distribution of the senses of l across the corpus C .",
"The first step relies on the assumption that different senses of l tend to appear in different contexts and vice versa.",
"Therefore, since BERT has been shown to capture the subtle distinctions between different meanings of the same word (Reif et al., 2019), we employ it to compute the representations of l across different sentences.",
"We thus cluster BERT embeddings in order to group together the occurrences of 1 A lemma with a specific Part-Of-Speech tag.",
"2 Across all the experiments we used the multilingual model of BERT, i.e., bert-base-multilingual-cased.",
"3 We use sense and synset interchangeably.",
"4 As representation for a sentence containing l we use the contextualized representation of l .",
"l that appear in similar contexts and are hence likely to express the same meaning.",
"More in detail, we iterate over all the sentences in S l C , i.e., those sentences in C where l appears, and project them in a latent space by means of BERT.",
"We thereby represent the sentence S l as v l = BERT ( , l ) , i.e., the representation of l in the sentence computed by BERT.",
"Once all the sentences in S l are associated with a vector, we group contextually-similar sentences together by leveraging the k -means algorithm (Lloyd, 2006).",
"K -means, in fact, creates internally-cohesive clusters that partition S l into k disjoint groups.",
"For example, in Table 1 we show an excerpt of two clusters extracted for the lexeme glass n 5 .",
"As one can see, the sentences in each set identify the semantics of the target word, with the upper cluster grouping all sentences related to the material meaning of glass n and the bottom one all those related to its container sense.",
"We note that no induction of senses is performed at any stage of our approach.",
"At the end of this step, the target lexeme l is associated with the set of its clusters U l .",
"The second step computes, for each cluster c of the lexeme l , a distribution over the possible senses of l that is specific to c .",
"To this end, by exploiting the lexical context of c , we build its weighted Bag-of-Words representation and use it to compute the cluster-level distribution over the senses in M l .",
"BoW construction We are now interested in finding which of the senses of l best suits the context provided by the sentences in c .",
"To this end, we extract the Bag of Words of c BoW c by considering all the content words in c .",
"BoW c , in fact, conflates 5 We use the lemma POS notation.",
"the contextual information of all the sentences in c in a list of unique words ranked by their frequency within the cluster.",
"We refine BoW c by retaining only its top n most frequent words, hence filtering out those that are less informative for determining the most suitable meaning of l in c and the stop-words.",
"To showcase the outcome of this step, in Table 2 we report the three most frequent words in the BoW for two clusters of glass n (top part) along with two excluded words (bottom part).",
"As one can see, the topmost words provide a precise characterization of the semantics of the clusters.",
"Cluster-Level Sense Distribution We now proceed by computing the probability of l expressing a given sense s M l within a cluster c .",
"To this end, we rank the synsets of l according to their relevance in the BabelNet semantic network with respect to a given set of nodes M BoW c = (cid:83) l (cid:48) BoW c M l (cid:48) , i.e., the set of all the possible meanings of the words in BoW c .",
"Thus, we follow Agirre et al. (2014) and employ the PageRank algorithm in its personalised version (Haveliwala et al., 2002, PPR), which computes the probability of reaching a node in the graph when starting from a fixed set of nodes.",
"Formally, we calculate the score of each synset in BabelNet as follows: v ( t +1) = (1 ) v (0) + Av ( t ) where A is the row-normalised adjacency matrix of the knowledge base, v (0) is the restart probability distribution, which is zero in every component except for those corresponding to the nodes in M BoW c , and is the well-known damping fac-tor which we set to 0 .",
"85 .",
"We further exploit the contexts in BoW c by weighting each synset s M BoW c by the sum of the frequencies of its lexicalizations that appear in BoW c .",
"Finally, after n iterations of the PPR algorithm, we extract the scores for each s M l from v n and normalise them to build the cluster-level sense distribution d cl for the lemma l in the cluster c .",
"As shown in Figure 1, the two clusters of glass n are now associated with two different distributions over glass n ' meanings in BabelNet, i.e., the container sense and the material sense.",
"In this last step, we compute the overall sense distribution of l with respect to the input corpus C",
"To this end, we leverage the cluster-level distributions and the clusters' sizes to compute the overall distribution over the senses of l as follows: d l = (cid:80) c U l | c | d cl (cid:80) c U l | c | where d cl is the vector representing the distribution over l 's synsets in the cluster c and U l is the set of clusters of l .",
"For example, considering the clusters depicted in Figure 1 and their distributions 6 , we associate the lexeme glass n with the distribution d glass n = { glass 1 n : 0 .",
"34 , glass 2 n : 0 .",
"66 } where glass 1 n is the sense 1 of glass n in BabelNet.",
"We repeat these steps for each lemma of interest to derive the distribution over its senses in BabelNet.",
"We now present a battery of experiments to assess the quality of our induced sense distributions on both intrinsic and extrinsic evaluation tasks.",
"First, we set the parameters of the model, namely, the sense inventory, the corpus, the number of words to retain in each Bag of Words, and the number of clusters to create for each lemma.",
"Then, we evaluate our automatically-induced distributions intrinsically, by computing their distance in comparison to a manually-annotated distribution, and extrinsically, on the standard English and multilingual Word Sense Disambiguation tasks.",
"6 We consider | CLUSTER 1 | = 50 and | CLUSTER 2 | = 100",
"Wikipedia 7 since it is freely available and covers more than 300 languages and most of the semantic domains.",
"As regards the number of clusters for a given lemma l , we set the parameter k of the k means algorithm to the number of l 's meanings in BabelNet.",
"Finally, we tune the number of words n to retain within each cluster's Bag of Words by manually evaluating the quality of the disambiguation step (see Section 3.2) when varying n between 5 and 20 with a 5 step and set n = 5 .",
"We compute the distributions for all the lemmas in English, Italian, Spanish, French and German which have at least one corresponding synset within the sense inventory.",
"Comparison Systems We compare CluBERT with the most recent and best-performing automatic and manual approaches for sense-distribution learning and MFS detection.",
"As regards the automatic methods for inducing sense distributions, we consider the two knowledge-based and multilingual approaches proposed by Pasini and Navigli (2018), i.e., EnDi and DaD, and the topic modelling-based approach proposed by Bennett et al. (2016), i.e., LexSemTM.",
"We also compare against three other approaches specialised in identifying the MFS of a word, namely, COMP2SENSE (Hauer et al., 2019), which exploits the distance between a word and a sense in a knowledge base, and WCT-VEC (Hauer et al., 2019) and UMFS-WE (Bhingardive et al., 2015), which, instead, leverage the distance between words and sense embeddings.",
"As for the manually-annotated competitors, we compare against the sense distributions and the MFS of WordNet (Miller et al., 1990).",
"These are both determined by the frequency of the senses in SemCor (Miller et al., 1993), when possible, and by manual annotations of the synsets' ranks, otherwise.",
"Concerning the multilingual evaluation, instead, we compare CluBERT with EnDi, DaD and the BabelNet MFS, which computes the MFS for a given lemma by taking its highest ranked sense according to BabelNet.",
"In this Section we estimate the quality of our automatically-induced sense distributions by comparing them to gold standard ones.",
"We use the dataset proposed by Bennett et al. (2016) which, 7 We used the June 2019 dump.",
"contains 50 distinct lemmas annotated with a gold distribution over their senses.",
"Hence, we compare the distributions for the target lemmas induced by CluBERT and its competitors with the manually-annotated ones.",
"In order to compare two distributions, we use two measures: the Jensen-Shannon divergence (JSD) and the Weighted Overlap (WO) (Pilehvar et al., 2013).",
"With both metrics, we average all the pairwise similarity between the gold distributions and the ones induced by the systems under comparison.",
"Jensen-Shannon Divergence The JSD computes a real value expressing the similarity between the two input distributions, which is 0 when they are identical, and higher than 0 otherwise.",
"Formally, given two input distributions d and d (cid:48) , the Jensen-Shannon divergence is defined as follows: JSD ( d, d (cid:48) ) = D ( d, M ) + D ( d (cid:48) , M ) 2 D ( d, d (cid:48) ) = (cid:88) s d ( s ) log (cid:18) d ( s ) d (cid:48) ( s ) (cid:19) where M = d + d (cid:48) 2 and D is the Kullback-Leibler divergence function in which d ( s ) is the value of the component corresponding to the synset s in the distribution d .",
"Weighted Overlap The WO measure computes the similarity between two input distributions by harmonically averaging the ranks of the distribu-tions' components when sorted according to their probabilities.",
"Its output value is 1 when the two inputs are identical, and 0 otherwise.",
"Formally, let d and d (cid:48) be two input distributions, their Weighted Overlap is computed as follows: W O ( d, d (cid:48) ) = | O | (cid:88) i =1 ( r i + r (cid:48) i ) 1 (2 i ) 1 where O is the set of common components between the input distributions and r i and r (cid:48) i are the ranks of the i -th component in d and d (cid:48) , respectively.",
"We now report the results of CluBERT and its competitors in terms of JSD and WO in comparison to the gold distributions provided by Bennett et al. (2016).",
"As one can see from Table 3, CluBERT Method JSD ( ) WO ( ) CluBERT 0.085 0.958 EnDi 0.099 0.937 DaD 0.204 0.902 LexSemTM 0.116 0.932 WordNet 0.255 0.837 Table 3: Similarity scores on the Bennett et al. (2016) gold standard in terms of JSD (the lower the better) and Weighted Overlap (the higher the better).",
"is the approach that better resembles the human-annotated distributions, in terms of both JSD and WO, achieving 0 .",
"085 and 0 .",
"958 , respectively, and outperforming the previous state of the art on this dataset, i.e., EnDi.",
"Interestingly enough, WordNet is the worst approach across the board scoring more than 0 .",
"1 worse than CluBERT on both evaluation measures.",
"We attribute these modest results to the fact that WordNet draws its distribution from annotations that are not up to date.",
"Furthermore, we note that CluBERT results are statistically-significant ( p < 0 . 1 ) when compared to the best competitor systems, i.e., EnDi, on both evaluation measures.",
"By manually inspecting the induced distributions that were most different to the gold ones, we note that the vast majority of CluBERT errors are due to the lack of senses for named entities in our inventory.",
"Indeed, many nouns that are commonly associated with objects or abstract meanings are also used for named entities, e.g., the lexeme flora n , which is commonly used to indicate either the living organism meaning, or the plant life of a region meaning, it is often used in compound nouns used to refer to named entities, such as F.C. Flora 8 , William Flora 9 , etc.",
"These occurrences are therefore considered by CluBERT, which, despite being able to cluster them correctly, fails to disambiguate the group containing named entities owing to the fact that the correct meaning is not available within the sense inventory.",
"As a result, most of the clusters where flora n appears as named entity are disambiguated with the living organism meaning, thereby 8 https://en.wikipedia.org/wiki/FC_ Flora 9 https://en.wikipedia.org/wiki/ William_Flora contributing to wrongly steering the sense distribution towards this meaning.",
"Since most of the errors are of this kind, better handling of named entities or the use of a larger sense inventory could further improve the performance of CluBERT.",
"In this Section we evaluate CluBERT's distributions on the English, domain-specific and multilingual all-words WSD tasks.",
"To this end, we leverage the sense distributions to extract a lemma's Most Frequent Sense (MFS), which is then used to annotate each occurrence of the lemma in the test sets.",
"In addition, we also integrate CluBERT MFS into two off-the-shelf WSD models and measure its impact.",
"Evaluation Datasets We consider all the standard English all-words WSD test sets contained in the framework presented by Raganato et al. (2017b), i.e., Senseval-2 (Edmonds and Cotton, 2001), Senseval-3 (Snyder and Palmer, 2004), SemEval-2007 (Pradhan et al., 2007), SemEval-2013 (Navigli et al., 2013), SemEval-2015 (Moro and Navigli, 2015) and ALL, i.e., the concatenation of all the previous datasets.",
"As regards the domain-specific evaluation we consider the 6 and 3 domains in SemEval-2013 and SemEval-2015, respectively, and test on each of them separately.",
"As for the multilingual evaluation, instead, we test on the Italian, Spanish, French and German datasets of SemEval-2013 and the Italian and Spanish test sets of SemEval-2015.",
"We note that both datasets make use of old versions of BabelNet (version 1.1.1 and 2.5, respec-tively).",
"For this reason, previous works used an in-house mapping between BabelNet versions to make them up to date.",
"However, in this process, several gold instances were lost making the datasets smaller than the original ones.",
"To be fair with other approaches, we compare CluBERT against them on the same datasets on which they tested.",
"Moreover, to encourage future comparisons, we also report CluBERT's performance on the newer versions of both gold standards made available by the Sapienza NLP group at https://github.",
"com/SapienzaNLP/mwsd-datasets , which comprise more instances than the older datasets and feature the latest version of BabelNet (4.0.1) 10 .",
"As 10 We used the WordNet split as we can only provide senses within the WordNet part of BabelNet.",
"a term of comparison, we also report the results of the BabelNet MFS on these datasets.",
"In what follows, we refer to the older versions of the multilingual tasks of SemEval-2013 and SemEval-2015 by juxtaposing the * symbol (SemEval-2013* and SemEval-2015*).",
"On all the aforementioned datasets we report the results in terms of F1, i.e., the harmonic mean of precision and recall.",
"Most Frequent Sense Strategy We extract the MFS of a target lemma l from its sense distribution d l by taking the synset with the highest probability, i.e., MF S ( l ) = argmax ( d l ) .",
"Therefore, we use the MFS of a lemma computed according to each system under comparison to tag all of l 's occurrences within the test sets.",
"Domain-Specific WSD Setup To assess the ability of CluBERT to scale over different domains and hence to extract a distribution that is skewed towards the topic of the input corpus, we build 8 distinct domain-specific corpora, one for each domain of SemEval-2013 and SemEval-2015's English datasets.",
"For this purpose, we exploit the 34 domain labels (Camacho-Collados and Navigli, 2017) available in BabelNet together with the mapping between synsets and Wikipedia pages to retrieve those pages that are peculiar to a specific domain, hence building a corpus C dom specific for the domain dom .",
"We then apply CluBERT, EnDi, DaD and LexSemTM on C dom and extract their respective MFS specific for each domain 11 .",
"Downstream Task Setup Finally, we test the benefits brought by CluBERT's distributions by including them in a knowledge-based and a supervised approach, namely: UKB 12 (Agirre et al., 2014): an off-the-shelf state-of-the-art knowledge-based WSD model based on the Personalised PageRank algorithm.",
"When provided, it makes use of the given sense distribution to bias its answers towards the MFS.",
"BiLSTM (Raganato et al., 2017a): an end-to-end neural sequence model which employs two bidirectional LSTM layers and an attention mechanism trained on multiple tasks, i.e., fineand coarse-grained WSD and Part-of-Speech tagging.",
"When provided, it makes use of the MFS backoff strategy whenever it comes to disambiguating a lemma unseen during training.",
"We compare these two models, firstly, when no prior knowledge is supplied, and then, when WordNet (UKBWN , BiLSTM WN ) and CluBERT (UKB CluBERT , BiLSTM CluBERT ) distributions are provided.",
"As one can see from Table 4, CluBERT attains the highest scores across the board, outperforming all the other automatic approaches by more than 10 F1 points.",
"More interestingly, CluBERT surpasses the hitherto unbeaten manual baseline of WordNet by 11 We do not compare against UMFS-WE, WCT-VEC and COMP2SENSE inasmuch as code and data are not available.",
"12 Version 3.2 available at http://ixa2.si.ehu.es/ ukb/ SemEval-2013 SemEval-2015 Method Biology Climate Finance Politics Social Issue Sport Math&Computer Biomedicine Social Issue CluBERT 72.9 70.9 69.0 79.2 70.9 61.4 52.3 77.3 75.2 DaD 79.0 63.0 64.0 67.0 68.0 54.0 59.8 63.9 54.3 EnDi 71.0 53.0 60.0 62.0 63.0 57.0 63.0 63.0 55.9 LexSemTM 56.0 47.0 49.0 51.0 52.0 34.0 47.7 63.0 40.7 WordNet MFS 61.0 59.0 52.0 64.0 58.0 56.0 47.2 67.8 62.4 Table 6: MFS performance in terms of F1 on the nominal instances of the different domains in the SemEval-2013 (Navigli et al., 2013) and SemEval-2015 test sets (Moro and Navigli, 2015).",
"a statistically-significant 13 difference (McNemar, 1947) of almost 2 F1 points on the ALL dataset.",
"In order to set a level playing field with EnDi and DaD, which cover nouns only, we also carried out our evaluation on the ALL dataset focusing on its nominal instances.",
"As shown in Table 5, CluBERT attains an F1 score of 70 .",
"6 , surpassing the best automatic competitor, i.e., DaD, by more than 4 F1 points.",
"More importantly, our induced distributions also outperform the well-known WordNet MFS strategy by 2 .",
"6 F1 points in this setting too.",
"This demonstrates that CluBERT's distributions are of higher quality than those induced by any of the other automatic and manual competitors.",
"We now focus on testing our distributions on the domain-specific documents available in the SemEval-2013 and SemEval-2015 WSD test sets.",
"As shown in Table 6, CluBERT outperforms all the other competitors on 7 out of the 9 domains by several points, falling behind DaD on the Biology domain and behind EnDi on the Math&Computer one.",
"This is mainly due to the fact that the senses in these two domains are poorly connected in BabelNet, hence making them hard to reach when applying the PPR algorithm (see Section 3.2).",
"DaD, which also exploits the BabelNet graph, seems to 13 2 test for statistical significance with p < 0 .",
"05 .",
"be more robust to this event inasmuch as it relies directly on the connections between domains and synsets and not only on those between words and concepts, as CluBERT does.",
"Nevertheless, when the senses of the target domain are well framed within the semantic network, our approach proves to be able to induce a distribution that accurately reflects the way the meanings of a word are spread within the input corpus.",
"In fact, CluBERT achieves the best results on all the other domains, with the highest improvement of 12 .",
"2 F1 points over the current state of the art on the Politics domain of SemEval-2013.",
"WordNet, instead, shows poor performance in this setting, too.",
"In fact, its MFS information is designed to work on a general domain setting and it cannot be customised easily for other scenarios.",
"All these results further corroborate our findings in the intrinsic evaluation, and they highlight the fact that WordNet distributions no longer reflect the way senses are spread across a corpus.",
"We now investigate the capabilities of CluBERT to scale over different languages by evaluating it on the multilingual Word Sense Disambiguation tasks of SemEval-2013* and SemEval-2015*.",
"As can be seen from Table 7, the differences in results between CluBERT and the other systems under comparison remain consistent with those reported for English.",
"Our approach, in fact, achieves on average a significant improvement of approximately 9 F1 points over the existing state of the art.",
"This demonstrates that CluBERT makes efficient use of its two complementary resources, i.e., BabelNet and BERT, in this way making up for the paucity of data in non-English languages.",
"Conversely, EnDi and DaD suffer from this shortcoming and perform either poorly (EnDi), or not consistently across languages (DaD).",
"As for the performance on the newer versions of the datasets (Table 8), we note SemEval-2013 SemEval-2015 Method IT ES DE FR IT ES CluBERT 66.6 69.5 72.3 62.3 62.8 61.5 BabelNet MFS 53.2 60.3 76.6 60.0 54.2 50.1 Table 8: MFS F1 scores on all instances of the WordNet split of SemEval-2013 and SemEval-2015 multilingual datasets mapped to the latest BabelNet version (4.0.1).",
"that CluBERT outperforms the BabelNet MFS on all languages but German.",
"The drop in performance on SemEval-2015 when compared to the older version of the dataset, is mainly due to the fact that the datasets now also include all the non-nominal instances which were excluded before to be fair with the other competitors.",
"As for future comparisons, we highly encourage the community to consider the results in Table 8 for CluBERT as they are computed on larger and more updated versions of the datasets.",
"Finally, we assess CluBERT MFS effectiveness when used as backoff strategy in two off-the-shelf WSD approaches, i.e., UKB and the BiLSTM with attention model presented by Raganato et al. (2017b) (see Section 6).",
"In Table 9 we report the performance of the two models without MFS, with WordNet MFS and with CluBERT MFS on the ALL WSD dataset.",
"As one can see, not only does our MFS provide a large boost of 4 .",
"6 and 5 .",
"2 F1 points when compared with the base models without backoff strategy, but it also leads the two systems to attain better performance than when using the WordNet MFS.",
"This strengthens our previous findings and crowns CluBERT as the best backoff strategy compared to all its alternatives.",
"These results open up to new scenarios where the CluBERT MFS might be preferred as backoff strategy for WSD models to the well-established WordNet MFS.",
"In fact, CluBERT attains higher results than WordNet on several WSD datasets, while at the same time assuring greater flexibility.",
"In fact, whereas WordNet MFS is static, CluBERT can be run on different corpora and can therefore adapt the sense distributions to various circumstances and different languages.",
"In this paper we presented CluBERT, an automatic multilingual approach which induces the distribution of word senses in an arbitrary input corpus by exploiting the contextual information coming from BERT and the lexical-semantic knowledge available in BabelNet.",
"CluBERT attains state-of-the-art results on both intrinsic and extrinsic evaluations, also beating the widely-used and manually-curated WordNet MFS.",
"When considering input corpora that come from specific domains, CluBERT showed an unmatched nimbleness in shaping the distributions accordingly, hence outperforming its manual and automatic competitors on most domains.",
"Similarly, our approach demonstrated its ability to scale well on different languages, attaining state-of-the-art results on the multilingual WSD tasks.",
"Finally, when injecting CluBERT MFS into off-the-shelf WSD models, we showed that it brings greater benefits than the WordNet MFS.",
"We release the sense distributions in five different languages at https://github.com/SapienzaNLP/clubert .",
"As future work, we plan to refine our approach by exploiting other strategies for weighting the words in the clusters and to leverage them for automatically building multilingual sense-tagged corpora.",
"The authors wish to greatly thank Claudio Delli Bovi for his comments, suggestions and late-night discussions on the manuscript.",
"The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research and innovation programme."
] | [
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"other",
"method",
"other",
"other"
] |
[
"Training a task-oriented dialogue agent with reinforcement learning is prohibitively expensive since it requires a large volume of interactions with users.",
"Human demonstrations can be used to accelerate learning progress.",
"However, how to effectively leverage demonstrations to learn dialogue policy remains less explored.",
"In this paper, we present that efficiently learns dialogue policy from demonstrations through policy shaping and reward shaping.",
"We use an imitation model to distill knowledge from demonstrations, based on which policy shaping estimates feedback on how the agent should act in policy space.",
"Reward shaping is then incorporated to bonus state-actions similar to demonstrations explicitly in value space encouraging better exploration.",
"The effectiveness of the proposed S 2 Agent is demonstrated in three dialogue domains and a challenging domain adaptation task with both user simulator evaluation and human evaluation.",
"With the flourishment of conversational assistants in daily life (like Google Assistant, Amazon Alexa, Apple Siri, and Microsoft Cortana), task-oriented dialogues that are able to serve users on certain tasks have increasingly attracted research efforts.",
"Dialogue policy optimization is one of the most critical tasks of dialogue modeling.",
"One of the most straightforward approaches is the rule-based method, which contains a set of expert-defined rules for dialogue modeling.",
"Though rule-based dialogue systems have a reasonable performance in some scenarios, handcrafting such kinds of rules is time-consuming and not scalable.",
"et al., 2018; Peng et al., 2017).",
"It has shown great potentials of using the RL-based method for building robust dialogue systems automatically.",
"However, due to its interactive nature, RL-based agents demand of an environment to operate in.",
"As illustrated in Figure 1, RL-based dialogue agents need to interact with human users and update its policy in an online fashion requiring that the agents have a good online performance from the start of training.",
"In addition, one of the biggest challenges of RL approaches is reward sparsity issue, which leads to exploration in large action space inefficient.",
"As a consequence, training RL-based agents expects a prohibitively large number of interactions to achieve acceptable performance, which may incur a significant amount of expense (Pietquin et al., 2011; Lipton et al., 2016; Peng et al., 2018b).",
"Several attempts are made to improve learning efficiency and tackle reward sparsity issues.",
"Different types of heuristics has been proposed in the form of intrinsic rewards to guide exploration more efficiently (Lipton et al., 2016; Mohamed and Rezende, 2015; Peng et al., 2017, 2018a; Takanobu et al., 2019).",
"When building a dialogue system, it is typically affordable to recruit experts to gather some demonstrations about the expected agent behaviors.",
"We therefore aim to address the aforementioned challenges from a different perspective and assume having access to human-provided demonstrations.",
"In this paper, we investigate how to efficiently leverage these demonstrations to alleviate reward sparsity and improve policy learning quality.",
"Previous work (Lipton et al., 2016) used a simple technique termed as Replay Buffer Spiking (RBS) to pre-fill experience replay buffer with human demonstrations, which yields good performance, especially in the beginning.",
"(Hester et al., 2018) proposed Deep Q-learning from Demonstrations (DQfD) that combines temporal difference updates with a supervised classification loss of actions in demonstrations to improve learning efficiency in gaming domains.",
"However, whether it is feasible and how to effectively leverage human demonstration in dialogue scenarios are less explored.",
"Hence, in this paper, we propose a new strategy of leveraging human demonstrations to learn dialogue policy efficiently.",
"Our dialogue agent, termed as S 2 Agent 1 , learns dialogue policy from demonstrations trough policy shaping and reward shaping .",
"Policy shaping (Griffith et al., 2013) is an approach to incorporating human feedback to advise how policy should behave like experts.",
"It estimates feedback of a state-action pair from human demonstrations and then utilizes the feedback to reconcile the policy from any RL-based agents.",
"This method speeds up learning progress in gaming domains but has not yet been studied in dialogue.",
"However, directly applying policy shaping to dialogue faces several challenges.",
"The original policy shaping uses a tabular analogous method to estimate feedback.",
"This method limits its feasibility for complex problems like dialogue that has large state action representations.",
"To deal with this issue, we propose to use deep neural networks, which represent state-action space with function approximation and distill knowledge from human demonstrations, to estimate feedback.",
"In addition, policy shaping calibrates agents' behavior in policy space, and it is inherently not designed to tackle reward sparsity issues.",
"Considering this, we further introduce reward shaping to bonus these state-action pairs that are similar to demonstrations.",
"It can be viewed as a shaping mechanism explicitly in value space to guide policy exploration towards actions which human experts likely conduct.",
"Our contributions in this work are two-fold: We propose a novel S 2 Agent that can effectively leverage human demonstrations to improve learning efficiency and quality through policy shaping and reward shaping.",
"We experimentally show that S 2 Agent can efficiently learn good policy with limited demonstrations on three single domain dialogue tasks and a challenging domain adaptation task using both simulator and human evaluations.",
"1 Agent with policy S haping and reward S haping 2 Related Work Dialogue policy learning Deep reinforcement learning (RL) methods have shown great potential in building a robust dialog system automatically (Young et al., 2013; Su et al., 2016; Williams et al., 2017; Peng et al., 2017, 2018a,b; Lipton et al., 2018; Li et al., 2020; Lee et al., 2019).",
"However, RL-based approaches are rarely used in real-world applications, for these algorithms often require (too) many experiences for learning due to the sparse and uninformative rewards.",
"A lot of progress is being made towards mitigating this sample complexity problem by incorporating prior knowledge.",
"(Su et al., 2017) utilizes a corpus of demonstration to pre-train the RL-based models for accelerating learning from scratch.",
"(Chen et al., 2017b) attempts to accelerate RL-based agents by introducing extra rewards from a virtual rule-based teacher.",
"However, the method requires extra efforts to design a rule-based dialogue manager.",
"(Hes-ter et al., 2018) improve RL learning by utilizing a combination of demonstration, temporal difference (TD), supervised, and regularization losses.",
"(Chen et al., 2017a) introduced a similar approach called companion teaching to incorporate human teacher feedback into policy learning.",
"Nevertheless, companion teaching assumes that there is a human teacher to directly give a correct action during policy learning process and meanwhile train an action prediction model for reward shaping based on human feedback.",
"Policy shaping Policy Shaping is an algorithm that enables introducing prior knowledge into policy learning.",
"(Griffith et al., 2013) formulates human feedback on the actions from an agent policy as policy feedback and proposes Advise algorithm to estimate humans Bayes feedback policy and combine it with the policy from the agent.",
"It shows significant improvement in two gaming environment.",
"(Misra et al., 2018) uses policy shaping to bias the search procedure towards semantic parses that are more compatible with the text and achieve excellent performance.",
"Reward shaping Reward shaping leverages prior knowledge to provides a learning agent with an extra intermediate reward F in addition to environmental reward r , making the system learn from a composite signal R + F (Ng et al., 1999).",
"However, it is not guaranteed that with reward shaping, an MDP can still have an optimal policy that is PolicyModel Imitation Model User Policy Shaping Human Demonstrations Reward Shaping SupervisedLearning | | g( , ) (,,, ) = + Figure 1: Illustration of the S 2 Agent for dialogue policy learning.",
"identical to the original problem unless the shaping is potential-based reward shaping(Ng et al., 1999; Marthi, 2007).",
"(Su et al., 2015) proposes to use RNNs to predict turn-level rewards and use the predicted reward as informative reward shaping potentials.",
"(Peng et al., 2018a; Takanobu et al., 2019) use inverse reinforcement learning to recover reward functions from demonstrations for reward shaping.",
"However, the estimated reward using these methods inevitably contains noise and failed to conform to potential-based reward function to guarantee the optimal policy.",
"Inspired by (Brys et al., 2015), we directly estimate potential-based reward function from demonstrations.",
"Our S 2 Agent is illustrated in Figure 1, consisting of four modules.",
"1) Dialogue policy model which selects the best next action based on the current dialogue",
"state.; 2) Imitation Model is formulated as a classification task that takes dialogue states as input and predicts associated dialogue action, aiming to distill behaviors from human",
"demonstrations.; 3) Policy Shaping provides feedback on how policy should behave like demonstrations.",
"It then reconciles a final action based on actions from the policy model and imitation model attempting to generate more reliable exploration trajectories; 4) Followed by a reward shaping module that encourages demonstration similar state-actions by providing extra intrinsic reward signals.",
"We consider dialogue policy learning as a Markov Decision Process (MDP) problem and improve the policy with Deep Q-network (DQN) (Mnih",
"et al., 2015).",
"2 In each turn, the agent observes the dialogue state s , and then execute the action a with (cid:15) -greedy exploration that selects a random action with probability (cid:15) or adopts a greedy policy a = argmax a (cid:48) Q ( s, a (cid:48) ; ) , where Q ( s, a (cid:48) ; ) approximates the value function, implemented as a multi-layer perceptron (MLP) parameterized by .",
"The agent then receives the reward r , perceives the next user response to a u , and updates the state to s (cid:48) .",
"The tuple ( s, a, r, s (cid:48) ) is stored in the experience replay D a .",
"This loop continues until the dialogue terminates.",
"The parameters of Q ( s, a (cid:48) ; ) are updated by minimizing the following square loss with stochastic gradient descent: L ( ) = E ( s,a,r,s (cid:48) ) D a [( y i Q ( s, a ; )) 2 ] y i = r + max a (cid:48) Q (cid:48) ( s (cid:48) , a (cid:48) ; (cid:48) ) (1) where [0 , 1] is a discount factor, and Q ( . ) is the target value function that is only periodically updated (line 26 in Algorithm 1).",
"By differentiating the loss function with regard to , we derive the following gradient: L ( ) = E ( s,a,r,s (cid:48) ) D a [( r + max a (cid:48) Q (cid:48) ( s (cid:48) , a (cid:48) ; (cid:48) ) Q ( s, a ; )) Q ( s, a ; )] (2) As shown in lines 25-26 in Algorithm 1, in each iteration, we update Q ( . ) using minibatch Deep Q-learning.",
"We assume having access to a corpus of human-human dialogues either from a log file or provided by recruited experts, which in this paper are termed as human demonstrations D e .",
"D e usually consists of a set of state-action pairs [( s 1 , a 1 ) , ( s 2 , a 2 ) , ..., ( s n , a n )] .",
"Theoretically, if D e is large enough to cover all the possible states, then the agent can respond perfectly by looking up the corresponding action from D e .",
"However, in practice, D e is usually limited and can not cover all the states.",
"Hence, we propose to use a supervised learning model (denoted as Imitation Model) to parameterize the relation of the 2 Our shaping methods are compatible with any policy optimizer.",
"In this paper, we employ DQN due to its simplicity and robustness in training.",
"However, replacing with other methods like Actor-Critic is straightforward.",
"states and actions expecting it to generalize to unseen state.",
"We formulate the task as a classification problem.",
"It takes dialogue s i as input and is trained with cross-entropy to minimize loss between action a i and predicted action a .",
"There are multiple models like RNN, CNN can be used for this purpose, but for simplicity, we choose to use MLP.",
"Incorporating human feedback into RL can accelerate its learning progress (Griffith et al., 2013; Cederborg et al., 2015).",
"Policy shaping is a representative that estimates human's Bayes optimal feedback policy and then combine the feedback policy with the policy of an underlying RL model.",
"The feedback policy is computed with the following equation: e ( a | s ) = C s,a C s,a + (1 C ) s,a (3) where s,a is the difference between the number of positive feedback and negative feedback, i.e. the number of occurrence of ( s, a ) in human demonstrations.",
"C here means the probability of consistency feedback from demonstrations 3 .",
"For example, C = 0 .",
"7 means with 0.7 probability the feedback from the demonstrations is considered reliable.",
"Otherwise, if C = 0 .",
"5 , then policy shaping is meaningless since it treats every action equally.",
"However, s,a is difficult to estimate from the demonstrations in dialogue scenarios since the state and action are large and sparse.",
"To deal with this issue, we propose to use the aforementioned Imitation Model to estimate feedback from demonstrations.",
"Specifically, we samples N times from imitation model policy e ( a | s ) to form a committee a 1 , a 2 , ..., a N denoting N votes.",
"Then we count for each action to generate c a as positive feedback from human demonstrations.",
"We use the expectation of binomial distribution N (1 C ) as the number of negative feedback.",
"Such that, in dialogue, we use: s,a = c a N (1 C ) (4) Finally, the policy is reconciled from the policy model and the imitation model by multiplying them together: ( a | s ) = a ( a | s ) e ( a | s ) (cid:80) a a ( a | s ) e ( a | s ) (5) 3 It is a parameter to control noise in the demonstrations.",
"Policy shaping operates in the policy space and can be viewed as a mechanism of biasing the agent learning towards the policy distilled from the demonstrations to improve learning efficiency.",
"The reconciled policy in equ.",
"5 allows the underlying RL model surpass the imitation model e .",
"Most of the reward functions in dialogue scenarios are usually manually defined.",
"Typically, a -1 for each turn and a significant positive or negative reward indicating the status of the dialogue at the end of a session.",
"Such sparse reward is one of the reasons that RL agents have poor learning efficiency.",
"Initially, the agents are fain to explore state-action uniformly at random.",
"To this end, we propose to use reward shaping to integrate priors into RL learning to alleviate reward sparsity.",
"Reward shaping is a popular method to integrate prior knowledge into reward function to improve policy exploration (Brys et al., 2015).",
"It provides the learning agent with an extra intermediate and task-related reward that enriches the original reward signal: r (cid:48) ( s, a ) = r ( s, a ) + FD ( ) (6) Where FD denotes rewards from demonstrations.",
"However, modifying the reward function may change the original MDPs and make the agent converge to a suboptimal point.",
"(Wiewiora et al., 2003) proved that the MDP keeps unchanged and maintains convergency property if FD ( ) is defined as: FD ( s, a, s (cid:48) , a (cid:48) ) = D ( s (cid:48) , a (cid:48) ) D ( s, a ) (7) where D ( s, a ) is a potential function of state-action pair.",
"Its definition is intuitive.",
"We bonus these policy paths that were consistent with the demonstrations.",
"As such, the value of D ( s, a ) is expected to be high when action a is demonstrated in a state s d similar to s , and if s is completely different from s da , D ( s, a ) should be close to 0.",
"To achieve this, multi-variate Gaussian is used to compute the similarity between state-action pairs.",
"(8) We search through the demonstrations to obtain the sample with highest similarity:",
"Using reward shaping to learn policy has several advantages.",
"It leverages demonstrations to bonus these state-actions that are similar to demonstrations.",
"The reward calculated from reward shaping is more informative and demonstration guided than the human-defined reward, which mitigates the reward sparsity issue to some degree.",
"We evaluate the proposed S 2 Agent with a user simulator on several public task-oriented datasets, including movie ticket booking, restaurant reservation, and taxi reservation.",
"Additionally, to asses the generalization capability of shaping mechanism, We conduct domain adaptation experiments.",
"Finally, human evaluation results are reported.",
"The raw conversation data in the movie ticket booking task are collected through Amazon Mechanical",
"Turk, and the data for the restaurant reservation and taxi calling scenario is provided by Microsoft Dialogue Challenge 4 .",
"The three datasets have been manually labeled based on a schema defined by domain experts.",
"We extend and annotated movie booking task with a payment scenario to simulate the situation of extending the dialogue system with new slots and values.",
"All datasets contain 11 intents.",
"The movie dataset contains 13 slots, and the other three contain 29 slots.",
"Detailed information about the intents and slots is provided in Appendix A table 3.",
"To benchmark the performance of the shaping mechanism, we have developed different versions of task-completion dialogue agents for comparison as follows:",
"Imitation Model (IM) agent is implemented",
"with Multi-Layer Perception and trained with the human demonstrations data to predict actions given dialogue states.",
"DQN agent is learned with Deep Q-Network.",
"EAPC Teaching via Example Action with Predicted Critique (EAPC) introduced in (Chen et al., 2017a) leverages real-time human demonstrations to improve policy learning.",
"EAPC assumes the existence of human teachers during the learning process.",
"It receives example actions from human teachers and, in the meantime, trains an action prediction model with the example actions as a critic for turn-level reward shaping.",
"Since human teachers are not available in our case, we implement EAPC in the absence of teachers but use the same amount of human demonstrations to train a weak action prediction model.",
"If the predicted action is identical to the action given by the policy model, the agent receives an extra positive reward otherwise an extra negative reward.",
"This method can be viewed as a variant of S 2 Agent with only reward shaping using noise reward estimations from the imitation model.",
"loss from human demonstrations to DQN to ensure that the agent predicts correct actions on human demonstrated states.",
"In the early learning phase, DQfD is trained only with the demonstrations to obtain a policy that mimics the human.",
"Then, accumulated experiences mixed with the demonstration are used to train DQfD.",
"S 2 Agent is our proposed agent that is trained with both policy shaping and reward shaping, as described in Algorithm 1.",
"S 2 Agent w/o rs is a variant of S 2 Agent which learns policy with only policy shaping to reconcile the final action.",
"S 2 Agent w/o ps is a variant of S 2 Agent but only has reward shaping to bonus state-actions similar to demonstrations.",
"Implementation Details Imitation model agents for all domains are single layer MLPs with 50 hidden dimensions and tanh as the activation function.",
"The IM agent is also used in policy shaping to reconcile the policy.",
"All RL-based agents (DQN, DQfD, S 2 Agent ) are MLPs with tanh activations.",
"Each policy network Q(.) has one hidden layer with 60 hidden nodes.",
"All the agents are trained with the same set of hyper-parameters.",
"(cid:15) -greedy is utilized for policy exploration.",
"We set the discount factor as = 0 .",
"9 .",
"The target network is updated at the end of each epoch.",
"To mitigate warm-up issues, We build a naive but occasionally successful rule-based agent to provide experiences in the beginning.",
"For a fair comparison, we pre-fill the experience replay buffer D a with human demonstrations for all the variants of agents (Lipton et al., 2016).",
"Confidence factor C used in policy shaping is set 0.7.",
"As for the reward shaping, in equ.7 is set as 1.",
"Training RL-based dialogue agents require an environment to interact with, and it usually needs a large volume of interactions to achieve good performance, which is not affordable in reality.",
"It is commonly acceptable to employ a user simulator to train RL-based agents (Jain et al., 2018; Li et al., 2016; Schatzmann et al., 2007).",
"We adopt a public available agenda-based user simulator (Li et al., 2016) for our experiment setup.",
"During training, the simulator provides the agent with responses and rewards.",
"The reward is defined as -1 for each turn to encourage short turns and a",
"large positive reward ( 2 L ) for successful dialogue or a negative reward of L for failed one, where L (set as 70) is the maximum number of turns in each dialogue.",
"A dialogue is considered successful only if the agent helps the user simulator accomplish the goal and satisfies all the user's search constraints.",
"In addition, the average number of turns and the average reward are also reported to evaluate each model.",
"Main Results.",
"The main simulation results are shown in Table 1 and Figure.2, 3, 4.",
"The results show that with shaping mechanisms, S 2 Agent learns much faster and performs consistently better than DQN and DQfD in all the domains with a statistically significant margin.",
"Figure 2 shows the learning curve of different agents in different domains.",
"Firstly, the DQN agent performs better than the IM agent, which is not surprising since it interacts with the simulator and is optimized to solve user goals.",
"DQfD and EAPC agents leverage human demonstrations to mitigate the reward sparsity issues.",
"Their performances are consistently better than DQN.",
"Besides, S 2 Agent w/o ps uses reward shaping to alleviate reward sparsity by bonusing additional rewards for states that are consistent with demonstrations.",
"As a consequence, it performs better than DQN in all the domains.",
"Though EAPC has a similar reward shaping mechanism, its reward estimation relies heavily on the qualify of the action prediction model.",
"As such, EAPC performs slightly worse than S 2 Agent w/o ps .",
"In addition, policy shaping reconciles the agent action with knowledge learned from human demonstrations.",
"It biases the agent to explore these actions which human expert does.",
"As shown in figure 2, S 2 Agent w/o rs learn the dialogue policy much faster than all the baselines.",
"In the Movie domain, it achieves nearly a 60% success rate using only 20 epochs.",
"By contrast, the second-best agent DQfD only achieves a 20% successful rate at epoch 20.",
"Similar results are also observed in Restaurant and Taxi domains.",
"When integrating both policy shaping and reward shaping to DQN, S 2 Agent achieves the best performance and is more data-efficient.",
"For example, S 2 Agent in the Taxi domain achieves approximately 60% successful rate at 50 epoch while the following competitor only has around 40% successful rate.",
"The above observation also confirms that policy shaping and reward shaping operate in different dimensions, which means policy shaping improves the learning by directly calibrating in the action space and reward shaping in the value function space, and are mutual-complementary.",
"Noted that the improvement of combining policy shaping and reward shaping in the Movie domain is not as significant as that in Restaurant and Taxi.",
"This is too large degree attributed to the increased complexity of Restaurant and Taxi dataset, which have two times more slots than the Movie dataset, meaning that the state-action space is much larger than the movie domain and posing more challenges in exploration.",
"Under this situation, policy shaping and reward shaping benefit the S 2 Agent to a large extent.",
"demonstrations.",
"Intuitively, the number of human demonstrations has a large impact on policy learning.",
"The imitation model agent might be able to summarize a good expert policy when a large volume of human demonstrations is available.",
"However, we hope the shaping mechanism is capable of improving learning efficiency with limited human demonstrations for RL-base agents.",
"As such, we Figure 4: Learning curves of different agents in Movie-Ext domain, all the agents are adapted from trained agents in Movie domain.",
"experiment with different sizes of demonstrations between 25 and 125 to asses the effect of different numbers of human demonstration on learning efficiency and quality.",
"Figure 3 shows the average performance of each agent during learning, which indicates the learning speed and quality.",
"Our proposed shaping mechanisms improve policy learning speed and quality and are robust to the number of demonstrations.",
"Even with the small number of human demonstrations as 25, S 2 Agent achieves a 5% higher success rate than DQfD and EAPC in the Movie domain and 10% in the Taxi domain.",
"As the number of demonstrations increases, the gap between DQfD and S 2 Agent becomes larger, showing that policy mechanisms can still benefit from more human demonstrations available.",
"Results of domain extension Typically, RL-based agents are built with a fixed ontology.",
"However, a dialogue system should be able to evolve as being used to handle new intents, slots, unanticipated actions from users.",
"To asses the ability of quickly adapting to the new environment, we extend existing movie user simulator, denoted as Movie-Ext, to simulate domain adaption scenario.",
"Movie-Ext has an additional payment task requiring the agent to converse with users to firstly book a ticket and then finish the payment.",
"Details about the extended intent/slots can be found in the in appendix Table.3.",
"All the agents are continually optimized from the previously trained agents for the movie ticket booking task.",
"Meanwhile, we additionally collect a small number of human demonstrations to update the IM agent.",
"Figure 4 shows the learning curves of different agents on the extended task.",
"As we can see, both S 2 Agent and S 2 Agent w/o rs can quickly adapt to the new environment and outperform the IM agent, with only 150 epochs it achieves around 50% success rate.",
"Though DQfD explicitly leverages human demonstrations, it still lags behind w/o rs , showing that shaping in the policy space is more effective than solely adding supervised learning loss for Q-learning.",
"Reward shaping also benefits DQN to explore better policy.",
"These observations confirm that S 2 Agent with shaping mechanism is capable of quickly adapting to the new environment.",
"User simulators are not necessary to reflect the complexity of human users (Dhingra et al., 2017).",
"To further evaluate the feasibility of S 2 Agent in real scenarios, We deploy the agents in Table 1 to interact with real human users in Movie and Movie-Ext domains 5 .",
"All evaluated agents are trained with 50 epochs and 200 epochs for Movie and Movie-Ext respectively.",
"In each dialogue session, one of the agents is randomly selected to converse with a human user.",
"Each user is assigned with a goal sampled from the corpus and is instructed to converse with the agent to complete the task.",
"Users have the choice of terminating the task and ending the session at any time if users believe that the dialogue is unlikely to succeed or simply because the agent repeats for several turns.",
"In such a case, the session is considered as a failure.",
"Finally, at the end of each session, users are required to give explicit feedback on whether the dialogue succeeded (i.e., whether 5 For the time and cost consideration, we only conduct experiments on Movie and Movie-Ext domains.",
"the movie tickets were booked (and paid) with all the user constraints satisfied).",
"Additionally, users are requested to rate the session on a scale from 1 to 5 about the quality/naturalness (5 is the best, 1 is the worst).",
"We collect 50 dialogue sessions for each agent.",
"The results are listed in Table 2.",
"S 2 Agent and S 2 Agent w/o rs perform consistently better than DQN and DQfD, which is consistent with what we have observed in simulation evaluation.",
"In addition, S 2 Agent achieves the best performance in terms of success rate and user rating.",
"In this paper, we present a new strategy for learning dialogue policy with human demonstrations.",
"Compared with previous work, our proposed S 2 Agent is capable of learning in a more efficient manner.",
"By using policy shaping and reward shaping, S 2 Agent can leverage knowledge distilled from the demonstrations to calibrate actions from underlying RL agents for better trajectories, and obtains extra rewards for these state-actions similar to demonstrations alleviating reward sparsity for better exploration.",
"The results of simulation and human evaluation show that our proposed agent is efficient and effective in both single domain and a challenging domain adaptation setting.",
"We appreciate the efforts from the anonymous reviewers; they have helped us improve this paper a lot.",
"The research described in this paper is partially supported by Hong Kong RGC-GRF grant 14204118."
] | [
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"other",
"other"
] |
[
"We address the problem of enhancing model robustness through regularization.",
"Specifi-cally, we focus on methods that regularize the model posterior difference between clean and noisy inputs.",
"Theoretically, we provide a connection of two recent methods, Jacobian Regularization and Virtual Adversarial Training, under this framework.",
"Additionally, we generalize the posterior differential regularization to the family of f -divergences and characterize the overall framework in terms of Jacobian matrix.",
"Empirically, we compare those regular-izations and standard BERT training on a diverse set of tasks to provide a comprehensive profile of their effect on model generalization.",
"For both fully supervised and semi-supervised settings, we show that regularizing the posterior difference with f -divergence can result in well-improved model robustness.",
"In particular, with a proper f -divergence, a BERT-base model can achieve comparable generalization as its BERT-large counterpart for in-domain, adversarial and domain shift scenarios, indicating the great potential of the proposed framework for enhancing NLP model robustness.",
"1 1 Introduction Although recent neural network based models have achieved great success in a wide range of natural language processing (NLP) tasks, these models may still suffer catastrophic degradation in out-of-domain generalization to datasets with domain shift or adversarial scenarios (Nie et al., 2019; Hsieh et al., 2019).",
"For example, large-scale pretrained neural language models (Devlin et al., 2019; Liu et al., 2019) have better generalization, but experience performance reduction caused by domain shift (Hendrycks et al., 2020).",
"Textual entailment models trained on MNLI (Williams et al., 2018) have picked up superficial cues focusing on either the 1 Code is available at https://github.com/ hao-cheng/f-divergence .",
"presence of certain keywords (Gururangan et al., 2018) or whether similar words are mentioned in the sentence pairs (McCoy et al., 2019).",
"Jia and Liang (2017) have also shown that SQuAD models are very easily distracted by irrelevant sentences that contain many question words regardless of context, despite of their human-performance on in-domain data.",
"As shown in Figure 1, three BERT-based (Devlin et al., 2019) models perform well for in-domain evaluation data, but transfer poorly to out-of-domain datasets with domain shift or adversarial attack, i.e. more than 25% relative performance reduction.",
"Achieving good generalizations towards datasets with domain shift has been a long-standing goal of domain adaptation.",
"Various methods (Blitzer et al., 2007; Daum III, 2007) have been developed for training models to learn effectively from both in-domain (source) and out-of-domain (tar-get) datasets.",
"Additionally, the recent discovery on the prevalence of data biases (Gururangan et al., 2018; Jia and Liang, 2017; McCoy et al., 2019), unintended correlations between input and output learned by statistical models, ignites the development of model debiasing techniques (Clark et al., 2019; He et al., 2019).",
"These methods leverage discovered data biases to improve model generalization over adversarial datasets (Jia and Liang, 2017; McCoy et al., 2019) designed to fool naively trained models.",
"Instead of relying on knowledge of the target dataset, we focus on task-agnostic training techniques for enhancing model robustness with access to only in-domain data.",
"Motivated by recent success of adversarial training in computer vision (Madry et al., 2018; Good-fellow et al., 2014) and NLP (Zhu et al., 2020; Jiang et al., 2020; Cheng et al., 2019; Wang et al., 2019), we investigate a regularization framework, which directly regularizes the model posterior difference for clean and noisy inputs, as a means to enhance the model robustness.",
"Here, we first provide a theoretical connection of two recent methods under this framework, i.e. Virtual Adversarial Training (VAT) (Miyato et al., 2018) and Jacobian Regularization (JR) (Sokolic et al., 2017).",
"In addition, we propose to generalize VAT and random perturbation training (RPT) (Miyato et al., 2018) with a family of probability distribution metrics, f -divergences, and characterize their connection with JR.",
"Given that large-scale pretrained neural language models have demonstrated their superior generalization for downstream NLP tasks under both matched (Devlin et al., 2019; Liu et al., 2019) and mismatched evaluations (Hendrycks et al., 2020), we systematically study the regularization framework using BERT (Devlin et al., 2019) on a diverse set of tasks in terms of both in-domain and out-of-domain generalization.",
"Specifically, we use representative datasets (Socher et al., 2013; Williams et al., 2018; Rajpurkar et al., 2016, 2018) from sentiment analysis, textual entailment and question answering (QA) for in-domain training and evaluation.",
"In order to assess the resulting model generalization over domain shift and adversarial attack, we then consider out-of-domain datasets (Tsatsaronis et al., 2015; Maas et al., 2011) and challenge adversarial datasets (Jia and Liang, 2017; McCoy et al., 2019) in a zero-shot learning fashion.",
"Our experiments show that regularizing the posterior difference for clean and noise inputs is very effective in improving model generalization under both supervised and semi-supervised learning settings.",
"Based on our theoretical analysis, both VAT and RPT variants, unlike JR, incorporate model confidence for adaptive regularization, which leads to consistently better empirical robustness over BERT with the standard fine-tuning.",
"Furthermore, we find that different f -divergences lead to different generalization behaviors for in-domain, domain shift and adversarial settings.",
"In our study, VAT with symmetric divergence achieve better generalization for in-domain and domain shift cases, while VAT with asymmetric divergence achieve more robustness toward adversarial attack.",
"More importantly, we show that a BERT-base model trained with a proper f -divergence can perform comparably to its corresponding BERT-large counterpart.",
"It is also worth noting that VAT with symmetric divergence lead to improved data efficiency, i.e. achieving comparable in-domain performance as fully-supervised models with only 50 % labelled data.",
"This further illustrates the great potential of the proposed general regularization framework for the semi-supervised setting.",
"Our main contributions are summarized as follows: 1) we generalize the posterior differential regularization framework to the family of f -divergences and provide additional divergence functions with different characteristics for regularization; 2) based on our framework, we analyze the family of regularization methods and show the theoretical connection of recently proposed methods, JR, VAT and RPT, in terms of their regularization effect on the input-output Jacobian matrix; 3) We provide a comprehensive profile of different regularization methods over a diverse set of NLP tasks and experimental insight into which f -divergence is more suitable for improving NLP model robustness under both supervised and semi-supervised settings.",
"In this section, we first introduce the regularization framework that penalizes the difference of posterior between clean and noisy inputs.",
"Based on this framework, we set up the basic notions of two recent methods, i.e. VAT and JR, and show their theoretical connection.",
"Finally, we generalize the posterior differential regularization with any function from the family of f -divergences and characterize their local smoothness promoting in terms of Jacobian matrix.",
"In the following, we use f ( x ) : R n R m to denote the posterior function which is a neural network parameterized by that maps the input x R n to the output probability space with m discrete classes.",
"Both adversarial learning (Goodfellow et al., 2014) and the posterior difference regularization aim at making the model more robust.",
"Adversarial learning focuses on minimizing the following objective min max (cid:107) (cid:15) (cid:107) c L ( f ( x + (cid:15) ) , y ) , (1) where L is the cross-entropy loss, (cid:15) R n is a random vector bounded in a norm by c , a small positive constant and y is the target label.",
"Instead, the posterior differential regularization directly promotes the model local smoothness, e.g. stabilizing the model posterior distribution towards small input perturbations.",
"Typically, it is in the form of min L ( f ( x ) , y ) + R ( f ( x ) , f ( x )) , (2) where x = x + (cid:15) , R is a regularization term penalizing the model instability, and is a hyperpa-rameter for balancing the classification loss and the regularization term.",
"As we can see, posterior differential regularization is a task-agnostic method which makes it applicable to semi-supervised, self-supervised and unsupervised learning.",
"For simplicity, we will use f and R to denote f and R ( f ( x ) , f ( x )) , respectively.",
"Jacobian Regularization: A recent regularization approach to stabilize the model is Jacobian regularization (Sokolic et al., 2017; Li et al., 2016).",
"Specifically, using the input-output Jacobian matrix, J = J f ( x ) R m n , we can get the first-order Taylor approximation f ( x ) = f ( x + (cid:15) ) = f ( x ) + J f ( x ) (cid:15).",
"In order to reduce the overall model sensitivity to the input perturbation, Sokolic et al. (2017) propose to directly regularize the Frobenius norm of the input-output Jacobian matrix so that",
"where (cid:107) (cid:107) 2 is the L 2 norm, (cid:107) (cid:107) sp is the spectral norm, and (cid:107) J (cid:107) 2 F = tr( JTJ ) is the Frobeinus norm of the Jacobian matrix with tr as the trace operator.",
"In other words, by letting R = (cid:107) J (cid:107) 2 F , the L 2 difference between clean and noisy inputs is thus being effectively regularized.",
"et al., 2014), Miyato et al. (2018) introduce a regularized objective to enhance the model robustness towards small input perturbations",
"where KL is the well-known Kullback-Leibler divergence , and y = f ( x ) .",
"Based on the above definition, VAT essentially regularizes the KL-based worst-case posterior difference between the clean and noisy input using an inner loop to search for the most adversarial direction.",
"Although sharing with JR the same spirit of encouraging the local smoothness of the model, JR and VAT are not fully theoretically connected.",
"In what follows, we use a simple approach to draw the theoretical connection between these two methods.",
"Connection between VAT and JR: Here, we show that VAT and JR can be directly related through the definition of induced matrix norm.",
"Specifically, the matrix norm of the Jacobian matrix is (cid:107) J (cid:107) = sup (cid:107) (cid:107) =1 (cid:107) J (cid:107) , (5) where the matrix norm on the left side is induced by the corresponding vector norm on the right side.",
"It is easy to show that c 2 (cid:107) J (cid:107) 2 sp sup (cid:107) (cid:15) (cid:107) 2 = c (cid:107) J(cid:15) (cid:107) 22 sup (cid:107) (cid:15) (cid:107) 2 = c (cid:107) f ( x ) f ( x ) (cid:107) 21 2 sup (cid:107) (cid:15) (cid:107) 2 = c KL ( f ( x ) , f ( x )) , and the last inequality is attained based on Pinsker's inequality.",
"Therefore, the VAT regularization provides an upper bound for the spectral norm of the Jacobian matrix.",
"Although a similar attempt to relate JR and VAT has been first explored in (Abbas et al., 2016), we provide a simple and comprehensive connection.",
"Specifically, both VAT and JR regularize the upper bound of the spectral norm of the Jacobian matrix.",
"Posterior Differential Regularization with f divergence: Although both VAT and JR have been successful in improving model robustness, they are both special cases of regularizing the model posterior difference between the clean and noisy inputs.",
"One natural question is whether we can use other probability distribution metrics for regularization and characterize them in terms of Jacobian matrix.",
"In the following, we extend the posterior difference regularization with the family of f -divergences (Csiszr and Shields, 2004).",
"Furthermore, we show that posterior differential regularization with all f -divergences results in an adaptive variant of JR which incorporates model confidence.",
"where the generator function g : R + (cid:55) R is a convex and lower-semicontinuous function satisfying g (1) = 0 , x = x + (cid:15) and f i indicates the i -th element of vector f .",
"Different choices of g lead to several popular divergences, e.g. KL, squared Hellinger and Jensen-Shannon divergence.",
"Based on this, it is easy to show that the corresponding second order approximation is D g ( f ( x ) , f ( x )) g (cid:48)(cid:48) (1) 2 (cid:15) TJT diag (cid:18) 1 f (cid:19) J(cid:15), (7) where J is the input-output Jacobian of f , and diag (cid:16) 1 f (cid:17) is a diagonal matrix with elements equal to 1 f (See Appendix A for full derivation).",
"Compared with the Frobenius norm of Jacobian matrix (cid:107) J (cid:107) 2 F , Equation 7 can be seen as a weighted version of JR where each row is rescaled by the model confidence f i for the corresponding class.",
"In other words, it is close to JR for more confident classes, whereas for uncertain classes it allows less Jacobian variance.",
"Additionally, although g (cid:48)(cid:48) (1) is a constant once the generator function is selected, various f -divergences can lead to different approximations which might result in task-dependent benefits.",
"Therefore, different from KL-based VAT or its sampling alternative without the inner search for the most adversarial direction as proposed in (Miyato et al., 2018), we generalize the posterior differential regularization with the family of f divergences and show that they all provide an approximation to a variant of JR which adapts the regularization based on model confidence.",
"Given its superior performance over a wide range of NLP tasks, we focus on exploring different training",
"training techniques using BERT (Devlin et al., 2019).",
"We first describe the BERT representations used for all tasks considered in this paper.",
"Then, two variants of task-specific BERT-based models are introduced: 1) the sentence-level classifier for textual entailment and sentiment analysis, and 2) the extractive QA model.",
"Specifically, we focus on different ways of encoding input text and building task-specific layers using BERT representations.",
"BERT Representation: For all tasks considered in this work, an input text sequence is divided into subword units w t , t = 1 , . . . , T .",
"The tokenized input sequence is then transformed into embed-dings, x 1 , . . . , x T R n , through a token encoder, which combines a token embedding, a (token) position embedding and a segment embedding (i.e., which text span the token belongs to) by element-wise summation.",
"The embedding layer is used as the input to multiple transformer layers (Vaswani et al., 2017) to generate the contextual representations, h 1 , . . . , h T R d , which are the hidden states of the last layer of the BERT model.",
"For all regularizations, we sample noise vectors (cid:15) 1 , . . . , (cid:15) T from N (0 , I ) , and normalize each vector into L2 unit vector.",
"The noise input is then constructed by adding the normalized noise vector to the token embeddings, i.e. x 1 + c(cid:15) 1 , . . . , x T + c(cid:15) T .",
"Here, we fix c = 1e 3 in this paper.",
"Sentence-level Classifier: Following the standard setup of BERT-based textual entailment model (De-vlin et al., 2019), a pair of premise and hypothesis is converted into an input sequence in the form of \" [CLS] premise [SEP] hypothesis [SEP] \".",
"Here, [CLS] is a special token indicating the start of the whole sequence and [SEP] is another special token for separating the two sentences.",
"For sentiment analysis, a single sentence is converted to the form of \" [CLS] sentence [SEP] \".",
"For both classification tasks, the task-specific layer only takes the first hidden vector h [ CLS ] produced by BERT, corresponding to the [CLS] token.",
"Then, the probability of class k is P ( k | w 1 , . . . , w T ) W Ck h [ CLS ] , (8) where WC R m d is the learnable parameter, the subscript k indicates the k -th row of the matrix, and the bias term is left out for simplicity.",
"For standard BERT training, the log-likelihood based on Equation 8 is used.",
"For regularized models, the regularization term is added to stabilize the class probability change with regard to the input noise.",
"Extractive QA Model: For extractive QA, the probability space outcomes consist of token positions of answer spans.",
"Given a pair of question q and a passage p in the form of \" [CLS] question [SEP] passage [SEP] \", the BERT encoder produces contextualized representations for all tokens in the input.",
"Specifically, for each token position t in p , the final hidden vector h t R d is used as the contextualized token embedding, where d is the vector dimension.",
"The span-begin score is computed as s b ( i ) = w Tb h i using a weight vector w b R d .",
"The probability for a start position i is P b ( i ) = exp( s b ( i )) Z b , (9) where Z b is the normalizing factor computed by normalizing over I (the set of all possible positions in the passage), i.e. Z b = (cid:80) i I exp( s b ( i )) .",
"The span-end score s e ( j ) , the probability P e ( j ) for an end position j , and the normalizing factor Z e are defined in the same way.",
"The probability of an answer span ( i, j ) is P ( i, j ) = P b ( i ) P e ( j ) = exp( s b ( i ) + s e ( j )) Z b Z e .",
"Maximizing the log-likelihood of the above equation is equivalent to maximizing the log probabilities for the correct start and end position, respectively.",
"For regularized models, given it is computationally expensive to enumerate all possible spans, we apply two separate regularization terms for the start and end position probabilities, respectively.",
"In this section, we apply the regularization methods discussed so far to BERT and evaluate their performance on the model robustness.",
"Specifically, we consider two types of posterior regularization with f -divergences.",
"In addition to a VAT-like regularization with an inner search for the most adversarial direction, following (Miyato et al., 2018), we also evaluate the random perturbation training (RPT) with the family of f -divergences which only uses randomly sampled noise for regularization.",
"In this work, we focus on three representative f divergences, i.e. KL, squared Hellinger (SHL) and Jensen-Shannon divergence (JSD).",
"Dataset: All the datasets used in this paper are summarized in Table",
"1. We consider three tasks, i.e. QA, textual entailment, and sentiment analysis, where the last two are sentence classification tasks.",
"Following the literature, we report the exact match (EM) and F1 scores for QA datasets and classification accuracy for textual entailment and sentiment analysis.",
"For model training, we use MNLI (Williams et al., 2018) and SST-2 (Socher et al., 2013) and SQuAD v1.1/v2.0 (Rajpurkar et al., 2016, 2018), respectively.",
"The corresponding development set is used for evaluating the in-domain generalization.",
"To evaluate the out-of-domain generalization with domain shift, we use the BioAQS dataset (Tsatsaronis et al., 2015) from MRQA (Fisch et al., 2019) and the IMDB dataset (Maas et al., 2011).",
"Unlike SQuAD which is based on Wikipedia, BioAQS is a biomedical QA dataset constructed on PubMed articles.",
"Compared with SST-2 containing pithy export reviews (Socher et al., 2013), IMDB includes lengthy movie reviews from non-experts (Maas et al., 2011).",
"We directly apply the QA model trained on SQuAD v2.0 and the sentiment classifier trained on SSS-2 to BioAQS and IMDB, respectively.",
"To evaluate the model robustness towards adversarial attack, we use two challenging adversarial datasets, i.e. Adversarial SQuAD (Jia and Liang, 2017) and HANS (McCoy et al., 2019) for evaluating QA model trained on SQuAD v1.1 and the textual entailment model trained on MNLI, respectively.",
"The Adversarial SQuAD is constructed based on SQuAD v1.1 (Rajpurkar et al., 2016) by adding distracting sentences that have high overlap with the question and contain plausible answer candidates.",
"Naively trained models tend to exploit the word overlap with the given question and thus are fooled by those distracting sentences (Jia and Liang, 2017).",
"The HANS dataset is built using three heuristics to ensure that the hypothesis sentence only contains words from the premise sentence (McCoy et al., 2019).",
"Similarly, standard training results in models failing catastrophically, even for BERT.",
"Implementation: We follow the default setting used for fine-tuning the uncased BERT base model (Devlin et al., 2019).",
"We select the learning rate from { 3e 5 , 4e 5 } for QA models and { 2e 5 , 3e 5 } for classification models.",
"For both tasks, we tune the number of training epochs in { 2 , 3 , 4 , 5 } .",
"In addition, we search regularization weight in { 0 .",
"001 , 0 .",
"01 , 0 .",
"1 } for JR, and { 1 , 4 , 10 } for VAT and RPT.",
"We use the in-domain dev set for validation and select the best model based on F1 Task Training Metrics Evaluation Domain Shift Adversarial Attack Question Answering SQuAD v1.1 F1/Exact Match (EM) N/A Adversarial SQuAD Question Answering SQuAD v2.0 F1/Exact Match (EM) BioASQ N/A Textual Entailment MNLI Accuracy (Acc) N/A HANS Sentiment Analysis SST-2 Accuracy (Acc) IMDB N/A Table 1: Summary of datasets and their corresponding evaluation purpose and metrics.",
"In-domain: In this part, we focus on comparing the in-domain performance of different training methods.",
"In other words, each model is trained on the training set and evaluated on the corresponding matched development set.",
"The experiment is summarized in Table",
"2. In general, JR performs similarly to the standard BERT training with an exception case for SQuAD v2.0.",
"This is probably because JR uniformly regularizes the Jacobian matrix, which is particularly problematic for QA task with unanswerable questions.",
"Both RPT and VAT with different f -divergences achieve significant improvement over standard training for all four datasets, especially on SQuAD v2.0.",
"The results suggest incorporating the model confidence into regularization can achieve better in-domain Method BioASQ IMDB Avg F1/EM Acc BERT base 57.1/41.7 87.7 0.0 JR 60.8/46.0 87.4 +1.7 RPTKL 59.2/43.6 88.7 +1.6 RPTSHL 60.0/44.8 88.7 +1.9 RPTJSD 58.3/43.2 88.3 +0.9 VATKL 60.1/45.7 86.7 +1.0 VATSHL 60.7/45.9 87.4 +1.7 VATJSD 61.8/47.0 88.3 +2.6 BERT large 63.5/49.5 88.3 +3.5 Table 3: Domain shift evaluation of different training techniques on BioASQ and IMDB.",
"generalization.",
"Consistent with findings in (Miyato et al., 2018), by searching for the most adversarial perturbation direction, VAT variants achieve the largest boost for in-domain generalization.",
"Moreover, we find that both RPT and VAT with SHL and JSD provides additional improvement over their corresponding counterpart with KL which suggests the benefit of using alternative f -divergences for posterior difference regularization.",
"Lastly, by selecting the proper divergence, the performance gap between the BERT-base and BERT-large model is dramatically narrowed which indicates the advantage of applying posterior difference regularization with f -divergences on top of powerful text representations.",
"on datasets with domain shift, e.g. different topic or style.",
"Specifically, we apply the QA models trained on SQuAD v2.0 to the BioAQS version from MRQA (Fisch et al., 2019).",
"Similarly, we apply the sentiment analysis model trained on SST-2 to the IMDB test set.",
"The results are summarized in Table",
"3. Comparing Table 3 with Table 2, all methods suffer a noticeable performance drop for both QA and sentiment analysis when evaluated on test sets with domain shift.",
"Moreover, we observe more significant performance drop for the QA setting because the biomedical domain differs significantly from the Wiki domain in topic and style, resulting in a larger domain shift between the training and test QA datasets.",
"Consistent with findings in (Hendrycks et al., 2020), the in-domain performance is not predictive of the domain shift generalization.",
"Further, the performance of JR is not stable, with improvement on BioASQ but worse performance on IMDB.",
"Models trained with all three RPT variants result in consistent improvement over standard training on both out-of-domain datasets, suggesting that random perturbation is particularly effective in enhancing model robustness towards domain shift.",
"In particular, all RPT variants achieve comparable out-of-domain generalization on IMDB as BERT-Large.",
"Although all VAT variants achieve decent improvement on BioASQ, neither VATKL nor VATSHL generalize so well to IMDB.",
"This illustrates the importance of selecting a proper divergence for VAT style regularization.",
"In other words, domain-dependent search for the most adversarial direction with either KL or SHL might be suboptimal for model generalization over domain shift.",
"Adversarial Attack: Here, we evaluate different training techniques on adversarial attack scenarios, where datasets are intentionally constructed to fool naively trained models.",
"Specifically, we evaluate the QA models trained with SQuAD v1.1 and the textual entailment models learned on MNLI using the Adversarial SQuAD and the HANS datasets, respectively.",
"Table 4 summarizes the evaluation results of model robustness towards adversarial attacks with different training methods.",
"For both subsets (AddSent and AddOneSent) from Adversarial SQuAD and HANS, all regularization methods improve over standard BERT training.",
"In this case, models trained with VAT variants demonstrate stronger resilience towards learning superficial cues from data.",
"Specifically, VAT with KL achieves the largest improvement on both settings which indicates that an asymmetrical divergence might be more effective in avoiding learning data biases.",
"Although better text representations derived from BERT-Large are still more robust against adversarial attack than the base version, this gap can be effectively reduced by regularizing the posterior difference with f -divergences.",
"Compared with the recent debiasing method proposed in (Clark et al., 2019) that requires the knowledge of existing data bias, VAT variants can be an effective task-agnostic debiasing approach with better in-domain performance and comparable improvement for adversarial settings.",
"Semi-supervised Learning: One advantage of regularization methods is their compatibility with semi-supervised learning.",
"Given JR is not very effective for the fully-supervised learning, we focus on evaluating RPT and VAT with f -divergences under the semi-supervised setting.",
"Specifically, we use the two sentence classification datasets, MNLI and SST-2, for training.",
"We hold out 50% of the label information for the training data.",
"For standard BERT training, only the labelled part is used.",
"For both RPT and VAT variants, the rest unlabelled data is also included for training.",
"Both the cross en-SST-2 IMDB MNLI HANS BERT full 92.3 87.7 84.5 58.4 BERT base 91.2 86.3 82.7 51.5 RPTKL 91.3 87.0 83.6 53.8 RPTSHL 91.9 86.6 83.8 53.3 RPTJSD 91.7 86.5 83.7 51.8 VATKL 92.1 86.3 83.1 54.3 VATSHL 92.4 86.5 84.4 51.8 VATJSD 92.2 86.6 84.1 52.6 Table 5: Comparison of different methods with semi-supervised learning in classification accuracy.",
"tropy loss and the regularization term are optimized for the labelled samples, whereas only the regularization term is used for unlabelled ones.",
"Similar to the fully supervised setting, the models trained on MNLI is applied to HANS for evaluating the model robustness towards adversarial attack, and the models using SST-2 are applied to IMDB to assess the model performance under domain shift.",
"Results are summarized in Table",
"5. Compared with the fully supervised setting, all methods get lower classification accuracy across the board.",
"Both RPT and VAT variants again improve over standard training for both in-domain and out-of-domain evaluations.",
"It is worth mentioning that both SHL and JSD based VAT models trained with 50% labelled data on SST-2 and MNLI are on par with the corresponding standard BERT training with the full training set which illustrates the advantage of choosing a proper f -divergence for the semi-supervised setting.",
"With only half labelled data, Both RPT and VAT suffer a large drop on HANS and produce almost random predictions, indicating the complimentary benefits of data diversity.",
"We also further reduce the amount of labelled training data and observe the same trend where regularizing with different f -divergences can lead to improved data efficiency.",
"This demonstrates the potential of posterior differential regularization for NLP with low-resource scenarios.",
"With the goal of developing more robust NLP models, a line of recent work has been devoted to identifying various kinds of superficial patterns learned by high-performance models over many",
"popular datasets (Gururangan et al., 2018; Jia and Liang, 2017; McCoy et al., 2019).",
"The prevalence of data biases over popular datasets poses a real challenge of accurately estimating the model capacity for practical applications, because a closed dataset evaluation usually inflates the model performance.",
"This concern about dataset biases has led researchers to develop new diagnostic datasets (Jia and Liang, 2017; McCoy et al., 2019; Nie et al., 2019) and training techniques (Clark et al., 2019; He et al., 2019) to overcome those discovered biases.",
"Recent debiasing methods (Clark et al., 2019; He et al., 2019) require learning multiple models to access known data biases for the target dataset.",
"Moreover, they achieve more robust out-of-domain generalization at the price of in-domain performance degradation.",
"In contrast, we focus on the task-agnostic robust learning framework for enhancing model robustness and empirically show that regularization approaches under this framework can result in superior in-domain and out-of-domain generalization.",
"Training with noise has been a very popular approach for enhancing model robustness.",
"The dropout is a widely-used approach in deep learning to improve model generalization (Srivastava et al., 2014).",
"For adversarial learning methods, the main theme is reducing the model sensitivity toward small input perturbations (Goodfellow et al., 2014; Madry et al., 2018), which has been recently applied to both fine-turning (Jiang et al., 2020; Pereira et al., 2020; Zhu et al., 2020; Li and Qiu, 2020) and pre-training (Liu et al., 2020).",
"However, models trained with adversarial learning are found to have at-odd generalization (Tsipras et al., 2019; Zhang et al., 2019).",
"Our work studies learning methods with the goal of regularizing the model posterior difference of clean and noisy inputs.",
"We show that compared with the standard BERT training, the proposed posterior differential regularization with f -divergence lead to better NLP model robustness.",
"In this paper, we investigate methods regularizing the posterior difference between the clean and noisy inputs for improving model generalization for both in-domain and out-of-domain settings.",
"Specifically, we present theoretical analyses of three methods under this framework, i.e. VAT, JR, and RPT.",
"We further extend both VAT and PRT to the family of f -divergences and theoretically characterize them in terms of Jacobian matrix.",
"We also demonstrate their effectiveness in enhancing model robustness over a diverse set of NLP tasks under both fully-supervised and semi-supervised scenarios.",
"For future work, it is interesting to explore posterior differential regularization methods for weakly-supervised learning, such as relation extraction and QA with distant supervision.",
"We would like to thank the anonymous reviewers for valuable suggestions.",
"Yaoliang Yu thanks NSERC for funding support."
] | [
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"method",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"This paper presents a novel pre-trained language models (PLM) compression approach based on the matrix product operator (short as MPO) from quantum many-body physics.",
"It can decompose an original matrix into central tensors (containing the core information) and auxiliary tensors (with only a small proportion of parameters).",
"With the decomposed MPO structure, we propose a novel fine-tuning strategy by only updating the parameters from the auxiliary tensors, and design an optimization algorithm for MPO-based approximation over stacked network architectures.",
"Our approach can be applied to the original or the compressed PLMs in a general way, which derives a lighter network and significantly reduces the parameters to be fine-tuned.",
"Extensive experiments have demonstrated the effectiveness of the proposed approach in model compression, especially the reduction in finetuning parameters (91 % reduction on average).",
"The code to reproduce the results of this paper can be found at https://github.com/ RUCAIBox/MPOP .",
"Recently, pre-trained language models (PLMs) (De-vlin et al., 2019; Peters et al., 2018; Radford et al., 2018) have made significant progress in various natural language processing tasks.",
"Instead of training a model from scratch, one can fine-tune a PLM to solve some specific task through the paradigm of pre-training and fine-tuning .",
"Typically, PLMs are constructed with stacked Transformer layers (Vaswani et al., 2017), involving a huge number of parameters to be learned.",
"Though effective, the large model size makes it impractical for resource-limited devices.",
"Therefore, there is an increasing number of studies focused Authors contributed equally.",
"on the parameter reduction or memory reduction of PLMs (Noach and Goldberg, 2020), including parameter sharing (Lan et al., 2020), knowledge distillation (Sanh et al., 2019), low-rank approximation (Ma et al., 2019) and data quantization (Hubara et al., 2017).",
"However, these studies mainly apply parameter reduction techniques to PLM compression, which may not be intrinsically appropriate for the learning paradigm and architecture of PLMs.",
"The compressed parameters are highly coupled so that it is difficult to directly manipulate different parts with specific strategies.",
"For example, most PLM compression methods need to fine-tune the whole network architecture, although only a small proportion of parameters will significantly change during fine-tuning (Liu et al., 2020).",
"In this paper, we introduce a novel matrix product operator (MPO) technique from quantum many-body physics for compressing PLMs (Gao et al., 2020).",
"The MPO is an algorithm that factorizes a matrix into a sequential product of local tensors ( i.e., a multi way array).",
"Here, we call the tensor right in the middle as central tensor and the rest as auxiliary tensors .",
"An important merit of the MPO decomposition is structural in terms of information distribution: the central tensor with most of the parameters encode the core information of the original matrix, while the auxiliary tensors with only a small proportion of parameters play the role of complementing the central tensor.",
"Such a property motivates us to investigate whether such an MPO can be applied to derive a better PLM compression approach: can we compress the central tensor for parameter reduction and update the auxiliary tensors for lightweight fine-tuning?",
"If this could be achieved, we can derive a lighter network meanwhile reduce the parameters to be fine-tuned.",
"To this end, we propose an MPO-based compression approach for PLMs, called MPOP .",
"It is developed based on the MPO decomposition technique (Gao et al., 2020; Pirvu et al., 2010).",
"We have made two critical technical contributions for compressing PLMs with MPO.",
"First, we introduce a new fine-tuning strategy that only focuses on the parameters of auxiliary tensors, so the number of fine-tuning parameters can be largely reduced.",
"We present both theoretical analysis and experimental verification for the effectiveness of the proposed fine-tuning strategy.",
"Second, we propose a new optimization algorithm, called dimension squeezing , tailored for stacked neural layers.",
"Since mainstream PLMs usually consist of multiple Transformer layers, this will produce accumulated reconstruction error by directly applying low-rank approximation with MPO at each layer.",
"The dimension squeezing algorithm is able to gradually perform the dimension truncation in a more stable way so that it can dramatically alleviate the accumulation error in the stacked architecture.",
"To our knowledge, it is the first time that MPO is applied to the PLM compression, which is well suited for both the learning paradigm and the architecture of PLMs.",
"We construct experiments to evaluate the effectiveness of the proposed compression approach for ALBERT, BERT, DistillBERT and MobileBERT, respectively, on GLUE benchmark.",
"Extensive experiments have demonstrated the effectiveness of the proposed approach in model compression, especially dramatically reducing the finetuning parameters (91 % reduction on average).",
"Pre-trained Language Model Compression .",
"Since the advent of large-scale PLMs, several variants were proposed to alleviate its memory consumption.",
"For example, DistilBERT (Sanh et al., 2019) and MobileBERT (Sun et al., 2020c) leveraged knowledge distillation to reduce the BERT network size.",
"SqueezeBERT (Iandola et al., 2020) and Q8BERT (Zafrir et al., 2019) adopted special techniques to substitute the operations or quantize both weights and activations.",
"ALBERT (Lan et al., 2020) introduced cross-layer parameter sharing and low-rank approximation to reduce the number of parameters.",
"More studies (Jiao et al., 2020; Hou et al., 2020; Liu et al., 2020; Wang et al., 2020; Khetan and Karnin, 2020; Xin et al., 2020; Pappas et al., 2020; Sun et al., 2020a) can be found in the comprehensive survey (Ganesh et al., 2020).",
"Tensor-based Network Compression .",
"Tensor-based methods have been successfully applied to neural network compression.",
"For example, MPO has been utilized to compress linear layers of deep neural network (Gao et al., 2020).",
"Sun et al. (2020b) used MPO to compress the LSTM model on acoustic data.",
"Novikov et al. (2015) coined the idea of reshaping weights of fully-connected layers into high-dimensional tensors and representing them in Tensor Train (TT) (Oseledets, 2011) format, which was extended to other network architectures (Garipov et al., 2016; Yu et al., 2017; Tjandra et al., 2017; Khrulkov et al., 2019).",
"Ma et al. (2019) adopted block-term tensor decomposition to compress Transformer layers in PLMs.",
"Lightweight Fine-tuning.",
"In the past, lightweight fine-tuning was performed without considering parameter compression.",
"As a typical approach, trainable modules are inserted into PLMs.",
"For example, a side network is fused with PLM via summation in (Zhang et al., 2020), and adapter-tuning inserts task-specific layers (adapters) between each layer of PLMs (Houlsby et al., 2019; Lin et al., 2020; Rebuffi et al., 2017).",
"On the contrary, several studies consider removing parameters from PLMs.",
"For example, several model weights are ablated away by training a binary parameter mask (Zhao et al., 2020; Radiya-Dixit and Wang, 2020).",
"Our work is highly built on these studies, while we have a new perspective by designing the PLM compression algorithm, which enables lightweight fine-tuning.",
"It is the first time that MPO is applied to PLM compression, and we make two major technical contributions for achieving lightweight finetuning and stable optimization.",
"In this paper, scalars are denoted by lowercase letters ( e.g., a ), vectors are denoted by boldface lowercase letters ( e.g., v ), matrices are denoted by boldface capital letters ( e.g., M ), and high-order (order three or higher) tensors are denoted by boldface Euler script letters ( e.g., T ).",
"An n -order tensor T i 1 ,i 2 ,...i n can be considered as a multidimensional array with n indices { i 1 , i 2 , ..., i n } .",
"Matrix Product Operator .",
"Originating from quantum many-body physics, matrix product operator (MPO) is a standard algorithm to factorize a matrix into a sequential product of multiple local tensors (Gao et al., 2020; Pirvu et al., 2010).",
"where the T ( k ) [ d k 1 , i k , j k , d k ] is a 4-order tensor with size d k 1 i k j k d k in which (cid:81) nk =1 i k = I, (cid:81) nk =1 j k = J and d 0 = d n = 1 .",
"We use the concept of bond to connect two adjacent tensors (Pirvu et al., 2010).",
"The bond dimension d k is defined by: d k = min (cid:18) k (cid:89) m =1 i m j m , n (cid:89) m = k +1 i m j m (cid:19) .",
"From Eq.",
"(2), we can see that d k is going to be large in the middle and small on both sides.",
"We present a detailed algorithm for MPO decomposition in Algorithm",
"1. In this case, we refer to the tensor right in the middle as central tensor , and the rest as auxiliary tensor .",
"Figure 1 presents the illustration of MPO decomposition, and we use n = 5 in this paper.",
"MPO-based Low-Rank Approximation .",
"With the standard MPO decomposition in Eq.",
"(1), we can exactly reconstruct the original matrix M through the product of the derived local tensors.",
"Following (Gao et al., 2020), we can truncate the k -th bond dimension d k (see Eq.",
"(1)) of local tensors to d (cid:48) k for low-rank approximation: d k > d (cid:48) k .",
"We can set different values for { d k } nk =1 to control the expressive capacity of MPO-based reconstruction.",
"The truncation error induced by the k -th bond dimension d k is denoted by (cid:15) k (called local truncation error ) which can be efficiently computed as: (cid:15) k = d k (cid:88) i = d k d (cid:48) k i , (3) where { i } d k i =1 are the singular values of M [ i 1 j 1 ...i k j k , i k +1 j k +1 ...i n j n ] .",
"The proof can be found in the supplementary materials 1 .",
"Eq.",
"(1) indicates that the reconstruction error is bounded by the sum of the squared local truncation errors, which is easy to estimate in practice.",
"Suppose that we have truncated the dimensions of local tensors from { d k } nk =1 to { d (cid:48) k } nk =1 , the compression ratio introduced by quantum many-body physics (Gao et al., 2020) can be computed as follows: = (cid:80) nk =1 d (cid:48) k 1 i k j k d (cid:48) k (cid:81) nk =1 i k j k .",
"1 https://github.com/RUCAIBox/MPOP",
"The smaller the compression ratio is, the fewer parameters are kept in the MPO representation.",
"On the contrary, the larger the compression ratio is, and the more parameters there are, and the smaller the reconstruction error is.",
"When > 1 , it indicates the decomposed tensors have more parameters than the original matrix.",
"So far, most of pre-trained language models (PLM) are developed based on stacked Transformer layers (Vaswani et al., 2017).",
"Based on such an architecture, it has become a paradigm to first pre-train PLMs and then fine-tunes them on task-specific data.",
"The involved parameters of PLMs can be generally represented in the matrix format.",
"Hence, it would be natural to apply MPO-based approximation for compressing the parameter matrices in PLMs by truncating tensor dimensions.",
"In particular, we propose two major improvements for MPO-based PLM compression, which can largely reduce the fine-tuning parameters and effectively improve the optimization of stacked architecture, respectively.",
"Due to the high coupling of parameters, previous PLM compression methods usually need to fine-tune all the parameters.",
"As a comparison, the MPO approach decomposes a matrix into a list of local tensors, which makes it potentially possible to consider fine-tuning different parts with specific strategies.",
"Next, we study how to perform lightweight fine-tuning based on MPO properties.",
"Parameter Variation from Pre-Training .",
"To apply our solution to lightweight fine-tuning, we first conduct an empirical experiment to check the variation degree of the parameters before and after finetuning.",
"Here, we adopt the standard pre-trained BERT (Devlin et al., 2019) and then fine-tune it on the SST-2 task (Socher et al., 2013).",
"We first compute the absolute difference of the variation for each parameter value and then compute the ratio of parameters with different variation levels.",
"The statistical results are reported in Table",
"1. As we can see, most of parameters vary little, especially for the word embedding layer.",
"This finding has also been reported in a previous studies (Khetan and Karnin, 2020).",
"As discussed in Section 3, after MPO decomposition, the central tensor contains the majority of the parameters, while the auxiliary tensors only contain a small proportion of the parameters.",
"Such merit inspires us to consider only fine-tuning the parameters in the auxiliary tensors while keeping the central tensor fixed during finetuning.",
"If this approach was feasible, this will largely reduce the parameters to be fine-tuned.",
"Theoretical Analysis .",
"Here we introduce entanglement entropy from quantum mechanics (Cal-abrese and Cardy, 2004) as the metric to measure the information contained in MPO bonds, which is similar to the entropy in information theory but replaces probabilities by normalized singular values produced by SVD.",
"This will be more suitable for measuring the information of a matrix as singular values often correspond to the important information implicitly encoded in the matrix, and the importance is positively correlated with the magnitude of the singular values.",
"Following (Calabrese and Cardy, 2004), the entanglement entropy S k corresponding to the k -th bond can be calculated by: S k = d k (cid:88) j =1 v j ln v j , k = 1 , 2 , ..., n 1 , (6) where { v j } d k j =1 denote the normalized SVD eigenvalues of M [ i 1 j 1 ...i k j k , i k +1 j k +1 ...i n j n ] .",
"The entanglement entropy S k is an increasing function of dimension d k as described in (Gao et al., 2020).",
"Based on Eq.",
"(2), the central tensor has the largest bond dimension, corresponding to the largest entanglement entropy.",
"This indicates that most of the information in an original matrix will be concentrated in the central tensor.",
"Furthermore, the larger a dimension is, the larger the updating effect will be.",
"According to (Pirvu et al., 2010), it is also guaranteed in principle that any change on some tensor will be transmitted to the whole local tensor set.",
"Thus, it would have almost the same effect after convergence by optimizing the central tensor or the auxiliary tensors for PLMs.",
"that the affected information during fine-tuning is mainly encoded on the auxiliary tensors so that the overall variations are small.",
"Therefore, for lightweight fine-tuning, we first perform the MPO decomposition for a parameter matrix, and then only update its auxiliary tensors according to the downstream task with the central tensor fixed.",
"Experimental results in Section 5.2 will demonstrate that such an approach is indeed effective.",
"Most of PLMs are stacked with multiple Transformer layers.",
"Hence, a major problem with directly applying MPO to compressing PLMs is that the reconstruction error tends to be accumulated and amplified exponentially by the number of layers.",
"It is thus urgent to develop a more stable optimization algorithm tailored to the stacked architecture.",
"Fast Reconstruction Error Estimation .",
"Without loss of generality, we can consider a simple case in which each layer contains exactly one parameter matrix to be compressed.",
"Assume that there are L layers, so we have L parameter matrices in total, denoted by { M ( l ) } Ll =1 .",
"Let C ( l ) denote the corresponding central tensor with a specific dimension d ( l ) after decomposing M ( l ) with MPO.",
"Our idea is to select a central tensor to reduce its dimension by one at each time, given the selection criterion that this truncation will lead to the least reconstruction error.",
"However, it is time-consuming to evaluate the reconstruction error of the original matrix.",
"According to Eq.",
"(3), we can utilize the error bound (cid:113)(cid:80) n 1 k =1 (cid:15) 2 k for a fast estimation of the yielded reconstruction error.",
"In this case, only one (cid:15) k changes, and it can be efficiently computed via the pre-computed eigenvalues.",
"Fast Performance Gap Computation .",
"At each time, we compute the performance gap before and after the dimension reduction ( d ( l ) d ( l ) 1 ) with the stop criterion.",
"To obtain the performance p after dimension reduction, we need to fine-tune the truncated model on the downstream task.",
"We can also utilize the lightweight fine-tuning strategy in Section 4.1 to obtain p by only tuning the auxiliary tensors.",
"If the performance gap (cid:107) p p (cid:107) is smaller than a threshold or the iteration number exceeds the predefined limit, the algorithm will end.",
"Such an optimization algorithm is more stable to optimize stacked architectures since it gradually reduces the dimension considering the reconstruction error and the performance gap.",
"Actually, it is similar to the learning of variable matrix product states (Iblisdir et al., 2007) in physics, which optimizes the tensors one by one according to the sequence.",
"As a comparison, our algorithm dynamically selects the matrix to truncate and is more suitable to PLMs.",
"Algorithm 2 presents a complete procedure for our algorithm.",
"In practice, there are usually multiple parameter matrices to be optimized at each layer.",
"This can be processed in a similar way: we select some matrices from one layer to optimize among all the considered matrices.",
"Algorithm 2 Training with dimension squeezing.",
"Generally speaking, our approach can compress any PLMs with stacked architectures consisting of parameter matrices, even the compressed PLMs.",
"In other words, it can work with the existing PLM compression methods to further achieve a better compression performance.",
"Here, we select ALBERT (Lan et al., 2020) as a representative compressed PLM and apply our algorithm to ALBERT.",
"The procedure can be simply summarized as follows.",
"First, we obtain the learned ALBERT model (complete) and perform the MPO-decomposition to the three major parameter matrices, namely word embedding matrix, self-attention matrix and feed-forward matrix 2 .",
"Each matrix will be decomposed into a central tensor and auxiliary tensors.",
"Next, we perform the lightweight fine-tuning to update auxiliary tensors until convergence on downstream tasks.",
"Then, we apply the dimension squeezing 2 It introduces a parameter sharing mechanism to keep only one copy for both self-attention and feed-forward matrices.",
"optimization algorithm to the three central tensors, i.e., we select one matrix for truncation each time.",
"After each truncation, we fine-tune the compressed model and further stabilize its performance.",
"This process will repeat until the performance gap or the iteration number exceeds the pre-defined threshold.",
"In this way, we expect that ALBERT can be further compressed.",
"In particular, it can be fine-tuned in a more efficient way, with only a small amount of parameters to be updated.",
"Section 5.2 will demonstrate this.",
"In mathematics, MPO-based approximation can be considered as a special low-rank approximation method.",
"Now, we compare it with other low-rank approximation methods, including SVD (Henry and Hofrichter, 1992), CPD (Hitchcock, 1927) and Tucker decomposition (Tucker, 1966).",
"We present the categorization of these methods in Table",
"2. For PLM compression, low-rank decomposition is only performed once, while it repeatedly performs forward propagation computation.",
"Hence, we compare their inference time complexities.",
"Indeed, all the methods can be tensor-based decomposition ( i.e., a list of tensors for factorization) or matrix decomposition, and we characterize their time complexities with common parameters.",
"Indeed, MPO and Tucker represent two categories of low-rank approximation methods.",
"Generally, the algorithm capacity is larger with the increase of n (more tensors).",
"When n > 3 , MPO has smaller time complexity than Tucker decomposition.",
"It can be seen that SVD can be considered as a special case of MPO when tensor dimension n = 2 and CPD is a special case of Tucker when the core tensor is the super-diagonal matrix.",
"In practice, we do not need to strictly follow the original matrix size.",
"Instead, it is easy to pad additional zero entries to enlarge matrix rows or columns, so that we can obtain different MPO decomposition results.",
"It has demonstrated that different decomposition plans always lead to almost the same results (Gao et al., 2020).",
"In our experiments, we adopt an odd number of local tensors for MPO decomposition, i.e., five local tensors (see supplementary materials).",
"Note that MPO decomposition can work with other compression methods: it can further reduce the parameters from the matrices compressed by other methods, and meanwhile largely reduce the parameters to be fine-tuned.",
"Datasets .",
"We evaluate the effectiveness of compressing and fine-tuning PLMs of our approach MPOP on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019).",
"GLUE is a collection of 9 datasets for evaluating natural language understanding systems.",
"Following (Sanh et al., 2019), we report macro-score (aver-age of individual scores, which is slightly different from official GLUE score, since Spearman correlations are reported for STS-B and accuracy scores are reported for the other tasks) on the development sets for each task by fine-tuning MPOP.",
"BERT (Devlin et al., 2019): The 12-layer BERT-base model was pre-trained on Wikipedia corpus released by Google.",
"ALBERT (Lan et al., 2020): It yields a highly compressed BERT variant with only 11.6M parameters, while maintains competitive performance, which serves as the major baseline.",
"DistilBERT (Sanh et al., 2019): It is trained via knowledge distillation with 6 layers.",
"MobileBERT (Sun et al., 2020c): It is equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.",
"All these models are released by Huggingface 3 .",
"We select these baselines because they are widely adopted and have a diverse coverage of compression techniques.",
"Note that we do not directly com-3 https://huggingface.co/ Experiments Score SST-2(acc) MNLI(m_cc) QNLI(acc) CoLA(mcc) STS-B( ) QQP(acc) MRPC(acc) RTE(acc) WNLI(acc) Avg.",
"pare our approach with other competitive methods (Tambe et al., 2020) that require special optimization tricks or techniques ( e.g., hardware-level optimization).",
"Implementation .",
"The original paper of ALBERT only reported the results of SST-2 and MNLI in GLUE.",
"So we reproduce complete results denoted as ALBERT rep with the Huggingface implementation (Wolf et al., 2020).",
"Based on the pre-trained parameters provided by Huggingface, we also reproduce the results of BERT, DistilBERT and MobileBERT.",
"To ensure a fair comparison, we adopt the same network architecture.",
"For example, the number of self-attention heads, the hidden dimension of embedding vectors, and the max length of the input sentence are set to 12, 768 and 128, respectively.",
"Note that our focus is to illustrate that our approach can improve either original (uncompressed) or compressed PLMs.",
"In our main experiments, we adopt ALBERT as the major baseline, and report the comparison results in Table",
"3. Comparison with ALBERT .",
"As shown in Table 3, our approach MPOP is very competitive in the GLUE benchmark, and it outperforms ALBERT in all tasks (except MNLI) with a higher overall score of 79.7.",
"Looking at the last column, compared with ALBERT, MPOP reduces total parameters by 22% (#To).",
"In particular, it results in a significant reduction of pre-trained parameters by 91% (#Pr).",
"Such a reduction is remarkable in lightweight fine-tuning, which dramatically improves the fine-tuning effi-ciency.",
"By zooming in on specific tasks, the improvements over ALBERT are larger on CoLA, RTE and WNLI tasks.",
"An interesting explanation is that RTE and WNLI tasks have small training sets (fewer than 4 k samples).",
"The lightweight finetuning strategy seems to work better with limited training data, which enhances the capacity of PLMs and prevents overfitting on downstream tasks.",
"Ablation Results .",
"Our approach has incorporated two novel improvements: lightweight fine-tuning with auxiliary tensors and optimization with dimension squeezing.",
"We continue to study their effect on the final performance.",
"Here we consider three variants for comparison: (1) MPOP full and MPOP full+LFA are full-rank MPO representation (without reconstruction error), and fine-tune all the tensors and only auxiliary tensors , respectively.",
"This comparison is to examine whether only fine-tuning auxiliary tensors would lead to a performance decrease.",
"(2) MPOP dir directly optimizes the compressed model without the dimension squeezing algorithm.",
"This variant is used to examine whether our optimization algorithm is more suitable for stacked architecture.",
"Table 3 (last three rows) shows the results when we ablate these.",
"In particular, the dimension squeezing algorithm plays a key role in improving our approach (a significant performance decrease for MPOP dir ), since it is tailored to stacked architecture.",
"Comparing MPOP full with MPOP full+LFA , it is noted that fine-tuning all the parameters seems to have a negative effect on performance.",
"Compared with ALBERT, we speculate that fine-tuning a large model is more likely to overfit on small datasets ( e.g., RTE and MRPC).",
"These results show that our approach is able to further compress ALBERT with fewer fine-tuning parameters.",
"Especially, it is also helpful to improve the capacity and robustness of PLMs.",
"pressed or compressed PLMs.",
"We have evaluated its performance with ALBERT.",
"Now, we continue to test it with other BERT variants, namely original BERT, DistilBERT and MobileBERT.",
"The latter two BERT variants are knowledge distillation based methods, and the distilled models can also be represented in the format of parameter matrix.",
"We apply our approach to the three variants.",
"Table 4 presents the comparison of the three variants before and after the application of MPOP.",
"As we can see, our approach can substantially reduce the network parameters, especially the parameters to be fine-tuned.",
"Note that DistilBERT and MobileBERT are highly compressed models.",
"These results show that our approach can further improve other compressed PLMs.",
"Evaluation on Different Fine-Tuning Strategies .",
"Experiments have shown that our approach is able to largely reduce the number of parameters to be fine-tuned.",
"Here we consider a more simple method to reduce the fine-tuning parameters, i.e., only fine-tune the last layers of BERT.",
"This experiment reuses the settings of BERT (12 layers) and our approach on BERT ( i.e., MPOPB in Table 4).",
"We fine-tune the last 1-3 layers of BERT, and compare the performance with our approach MPOPB .",
"From Table 5, we can see that such a simple way is much worse than our approach, especially on the RTE task.",
"Our approach provides a more principled way for lightweight fine-tuning.",
"By updating auxiliary tensors, it can better adapt to task-specific loss, and thus achieve better performance.",
"introduced in Section 4.4, MPO is a special low-rank approximation method, and we first compare its compression capacity with other low-rank approximation methods.",
"As shown in Table 2, MPO and Tucker decomposition represent two main categories of low-rank approximation methods.",
"We select CPD (Henry and Hofrichter, 1992) Models SST-2 MRPC RTE",
"for comparison because general Tucker decomposition (Tucker, 1966) cannot obtain results with reasonable memory.",
"Our evaluation task is to compress the word embedding matrix of the released bert-base-uncased model 4 .",
"As shown in Figure",
"2(a), MPO achieves a smaller reconstruction error with all compression ratios, which shows that MPO is superior to CPD.",
"Another hyper-parameter in our MPO decomposition is the number of local tensors ( n ).",
"We further perform the same evaluation with different numbers of local tensors ( n = 3 , 5 , 7 ).",
"From Figure",
"2(b), it can be observed that our method is relatively stable with respect to the number of local tensors.",
"Overall, a larger n requires a higher time complexity and can yield flexi-ble decomposition.",
"Thus, we set n = 5 for making a trade-off between flexibility and efficiency.",
"We proposed an MPO-based PLM compression method.",
"With MPO decomposition, we were able to reorganize and aggregate information in central tensors effectively.",
"Inspired by this, we designed a novel fine-tuning strategy that only needs to fine-tune the parameters in auxiliary tensors.",
"We also developed a dimension squeezing training algorithm for optimizing low-rank approximation over 4 https://huggingface.co/bert-base-uncased stacked network architectures.",
"Extensive experiments had demonstrated the effectiveness of our approach, especially on the reduction of fine-tuning parameters.",
"We also empirically found that such a fine-tuning way was more robust to generalize on small training datasets.",
"To our knowledge, it is the first time that MPO decomposition had been applied to compress PLMs.",
"In future work, we will consider exploring more decomposition structures for MPO.",
"This work was partially supported by the National Natural Science Foundation of China under Grants No. 61872369, 61832017 and 11934020, Beijing Academy of Artificial Intelligence (BAAI) under Grant No.",
"BAAI2020ZJ0301, Beijing Outstanding Young Scientist Program under Grant No.",
"BJJWZYJH012019100020098, the Fundamental Research Funds for the Central Universities and the Research Funds of Renmin University of China under Grant No. 18XNLG22, 19XNQ047, 20XNLG19 and 21XNH027.",
"Xin Zhao and Zhong-Yi Lu are the corresponding authors."
] | [
"objective",
"abstain",
"objective",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"objective",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"The bigger the corpus, the more topics it can potentially support.",
"To truly make full use of massive text corpora, a topic model inference algorithm must therefore scale efficiently in 1) documents and 2) topics, while 3) achieving accurate inference.",
"Previous methods have achieved two out of three of these criteria simultaneously, but never all three at once.",
"In this paper, we develop an online inference algorithm for topic models which leverages stochasticity to scale well in the number of documents, sparsity to scale well in the number of topics, and which operates in the collapsed representation of the topic model for improved accuracy and run-time performance.",
"We use a Monte Carlo inner loop in the online setting to approximate the collapsed variational Bayes updates in a sparse and efficient way, which we accomplish via the Metropolis-Hastings Walker method.",
"We showcase our algorithm on LDA and the recently proposed mixed membership skip-gram topic model.",
"Our method requires only amortized O ( k d ) computation per word token instead of O ( K ) operations, where the number of topics occurring for a particular document k d the total number of topics in the corpus K , to converge to a high-quality solution.",
"Topic models are powerful tools for analyzing to-day's massive, constantly expanding digital text information by representing high-dimensional data in a low-dimensional subspace.",
"We can recover the main themes of a corpus by using topic models such as latent Dirichlet allocation (LDA) to organize, understand, search, and explore the documents (Blei et al., 2003).",
"Traditional LDA inference techniques such as variational Bayes and collapsed Gibbs sampling do not readily scale to corpora containing millions of documents.",
"To scale up inference, the main approaches are distributed algorithms (New-man et al., 2008) and stochastic algorithms (Hoff-man et al., 2010, 2013).",
"Stochastic algorithms, such as stochastic variational inference (SVI), operate in an online fashion, and hence do not need to see all of the documents before updating the topics, so they can be applied to corpora of any size, without expensive distributed hardware ( Hoffman et al., 2010).",
"The collapsed representation of topic models is also frequently important, as it leads to faster convergence, efficient updates, and lower variance in estimation ( Griffiths and Steyvers , 2004).",
"The stochastic collapsed variational Bayesian inference (SCVB0) algorithm, proposed by (Foulds et al., 2013), combines the benefits of stochastic and collapsed inference.",
"Larger corpora typically support more topics, which brings the additional efficiency challenge of training a larger model ( Mimno et al., 2012).",
"This challenge has been addressed by exploiting sparsity to perform updates in time sublinear in the number of topics.",
"A sparse variant of the SVI algorithm for LDA, SSVI, proposed by (Mimno et al., 2012), is scalable to large numbers of topics, but does not fully exploit the collapsed representation of LDA, which is important for faster convergence and improved inference accuracy, due to a better variational bound (Teh et al., 2007).",
"The Metropolis Hastings Walker (MHW) method (Li et al., 2014) scales well in the number of topics, and uses a collapsed inference algorithm, but it operates in the batch setting, so it is not scalable to large corpora.",
"LightLDA (Yuan et al., 2015) is a distributed approach to the MHW method which adopts a data-and-model-parallel strategy to maximize memory and CPU efficiency.",
"However, it is not an online approach, and furthermore requires multiple expensive computer clusters to converge faster.",
"Tensor methods are another approach to speeding up topic models (Anandku-mar et al., 2014; Arora et al., 2012), which theoretically guarantee the recovery of the true parameters by overcoming the problem of local optima.",
"These techniques use the method of moments instead of maximum likelihood estimation or Bayesian inference, which leads to lower data efficiency, and sometimes unreliable performance.",
"In this work, we propose a highly efficient and scalable inference algorithm for topic models.",
"We develop an online algorithm which leverages stochasticity to scale well in the number of documents, sparsity to scale well in the number of topics, and which operates in the collapsed representation of topic models.",
"We thereby combine the individual benefits of SVI, SSVI, SCVB0, and MHW into a single algorithm.",
"Our approach is to develop a sparse version of SCVB0.",
"Inspired by SSVI, we use a Monte Carlo inner loop to approximate the SCVB0 variational distribution updates in a sparse and efficient way, which we accomplish via MHW method.",
"To show the generality of our algorithm, we explore the benefits of our inference method for LDA and another recently proposed topic model, MMSGTM, with experiments on both small and large-scale datasets.",
"To build the foundation for our proposed method, in this section we provide the necessary background on LDA and MMSGTM topic models and their associated inference algorithms.",
"This is followed by a description of the MHW sampler for reducing topic model sampling complexity.",
"Probabilistic topic models such as LDA (Blei et al., 2003) use latent variables to encode co-occurrence patterns between words in text corpora and other bag-of-words represented data.",
"In LDA, we assume that the D documents in a corpus are each from mixture distributions of K individual topics k , k 2 f 1 ; :::; K g , each of which are discrete distributions over words.",
"For a document j of length N j , the local (document-level) variables (cid:18) j are a distribution over topics drawn from a Dirichlet prior with parameters (cid:11) k and for each token, global variables (corpus-level) k are drawn from a Dirichlet prior with parameters (cid:12) w .",
"Due to conjugacy, we can marginalize out topics (cid:2) and distributions over topics (cid:8) , and perform inference only on the topic assignments Z in the collapsed representation of LDA (Griffiths and Steyvers, 2004).",
"For scalable and accurate inference, Foulds et al. (2013) proposed a stochastic collapsed variational inference algorithm, SCVB0.",
"The SCVB0 approach computes a variational discrete distribution (cid:13) ij over the K topic assignment probabilities for each word i in each document j , but does not maintain the (cid:13) variables that increase the memory requirement of original batch CVB0 algorithm (Asuncion et al., 2009).",
"SCVB0 iteratively updates each (cid:13) ij using (cid:13) ijk : / N (cid:8) w ij k + (cid:12) w ij N Zk + w (cid:12) w ( N (cid:2) jk + (cid:11) k ) (1) for each topic k , with j th document's i th word w ij .",
"The NZ , N (cid:2) , and N (cid:8) are referred to as CVB0 statistics, where NZ is the vector of expected number of words assigned to each topic, each entry j , k of matrix N (cid:2) , and each entry w , k of matrix N (cid:8) are the expected number of times document j , and word w are assigned to topic k , respectively, across the corpus.",
"To do stochastic updates of these variables, one sequence of step-sizes (cid:26) (cid:8) for N (cid:8) and NZ and another sequence (cid:26) (cid:2) for N (cid:2) are maintained.",
"The update of N (cid:2) j for every token i of document j with an online average of the current value and its expected value is N (cid:2) j := (1 (cid:0) (cid:26) (cid:2) t ) N (cid:2) j + (cid:26) (cid:2) t C j (cid:13) ij (2) where C j is the document length.",
"In practice, it is too expensive to update N (cid:8) after every token.",
"This leads to the use of minibatch updates with the average of the M per-token estimates of the form Y ij , which is a W (cid:2) K matrix with the w ij th row being (cid:13) ij and with zeros in the other entries: N (cid:8) := (1 (cid:0) (cid:26) (cid:8) t ) N (cid:8) + (cid:26) (cid:8) t ^ N (cid:8) (3) NZ := (1 (cid:0) (cid:26) (cid:8) t ) NZ + (cid:26) (cid:8) t ^ NZ , (4) where ^ N (cid:8) = C j M j ij 2 MY ij , ^ NZ = C j M j ij 2 M (cid:13) ij , and C is the number of words in the corpus.",
"The SCVB0 algorithm outperforms stochastic VB (Hoffman et al., 2010) on large corpora by converging faster and often to a better solution (Foulds et al., 2013).",
"However, the SCVB0 algorithm does not leverage sparsity, and hence requires O ( K ) operations per word token.",
"To show the generality of our approach to topic models other than LDA, we will also apply our method to a recent model called the Mixed Membership Skip-gram Topic Model (MMSGTM) (Foulds, 2018), which combines ideas from topic models and word embeddings (cf.",
"also (Das et al., 2015; Liu et al., 2015)).",
"MMSGTM's generative model for words and their surrounding context is: (cid:15) For each word w i in the corpus Sample a topic z i (cid:24) Discrete ( (cid:18) w i ) For each word w c 2 context ( i ) (cid:3) Sample a context word w c (cid:24) Discrete ( z i ) .",
"The inferred model can then be used to train embeddings for topics and words, although we do not consider this here.",
"The MMSGTM admits a collapsed Gibbs sampler (CGS) which efficiently resolves the cluster assignments.",
"With Dirichlet priors on the parameters, the CGS update is p ( z i = k j : ) : / ( N ((cid:8)) (cid:0) i w i k + (cid:11) k ) (cid:2) j context ( i ) j c =1 N ((cid:8)) (cid:0) i w c k + (cid:12) w c + N ( i;c ) w c N ( Z ) (cid:0) i k + w (cid:12) w + c (cid:0) 1 , (5) where (cid:11) and (cid:12) are parameter vectors for Dirichlet priors over the topic and word distributions, N (cid:8) w i and N (cid:8) w c are input and output word-topic counts (excluding the current word), NZ is the total topic counts in output word-topic counts, and N ( i;c ) w c is the number of occurrences of word w c before the c th word in the i th context.",
"MMSGTM exploits the MHW algorithm, which scales sub-linearly in K, but not in the number of training documents.",
"The MHW method (Li et al., 2014), which is a key component of our approach, uses a data structure called an alias table which allows sampling from",
"a discrete distribution in amortized O (1) time.",
"Assuming initial probabilities p 0 ; p 1 ; :::; p l (cid:0) 1 of a distribution over l outcomes and average of probabilities a = 1 l , the alias table A can be formed as follows (Marsaglia et al., 2004): (cid:15) Initialize: for i from 0 to l (cid:0) 1 A alias [ i ] = i and A prob = ( i + 1) a (cid:15) Do the following steps n (cid:0) 1 times Find smallest p i and largest p j Set A alias [ i ] = j and A prob = i (cid:2) a + p i p j := p j (cid:0) ( a (cid:0) p i ) and p i := a .",
"Then, to sample from p using the alias table: (cid:15) Roll l -sided fair die to choose element i of A (cid:15) If Rand (1) < A prob [ i ] return i , else return A alias [ i ] .",
"Li et al. (2014) cache alias table samples, avoiding the need to store the table.",
"Once the supply of samples is exhausted they compute a new alias table.",
"They draw samples from the Gibbs sampling update, analogous to (cid:13) ij in Equation 1, in amortized O ( k d ) time by decomposing the update into p ( z ij = k j : ) / N (cid:2) jk N (cid:8) w ij k + (cid:12) w ij N Zk + w (cid:12) w + (cid:11) k N (cid:8) w ij k + (cid:12) w ij N Zk + w (cid:12) w (6) where the first term, sparse in k d , admits sampling in O ( k d ) time, and the second term is dense but slow changing.",
"A Metropolis-Hastings (M-H) update is used to correct for approximating the CGS update with a proposal distribution q ( k ) based on the stale alias samples.",
"Foulds et al. ( 2018) propose to apply simulated annealing to optimize instead of sample, and which improves mixing for the MMSGTM.",
"This is achieved by raising the model part of the M-H acceptance ratio for a new sample z ( new ) i (cid:24) q ( k ) to the power of 1 T j at iteration j : p ( accept z ( new ) i j : ) = (7) min (1 ; ( p ( z ( new ) i ) p ( z ( old ) i ) ) 1 Tj q ( z ( old ) i ) q ( z ( new ) i )) .",
"In this section, we introduce our approach, a sparse version of SCVB0, which combines the individual benefits of the SVI, SSVI, SCVB0 and MHW algorithms, to scale well not only in the",
"number of documents but also in the number of topics, while gaining the benefits of collapsed inference.",
"We refer to our method as the sparse stochastic collapsed variational Bayesian inference (SparseSCVB0) algorithm.",
"In SparseSCVB0, we obtain sparsity by substituting sparse Monte Carlo approximations (cid:13) ( pseudo ) ij for the original SCVB0 variational distributions (cid:13) ij .",
"The justification for this procedure, also used by (Mimno et al., 2012), is that the expected value of an average over one-hot samples from a distribution is equal to that distribution: E s i (cid:24) p ( s ) [ Si =1 (cid:14) k ( s i ) S ] = 1 SS i =1 E s i (cid:24) p ( s ) [ (cid:14) k ( s i ) ] = 1 SS i =1 k (cid:14) k ( s i = k ) p ( s i = k ) = p ( s = k ) .",
"Thus, the overall procedure is still a valid stochastic optimization algorithm.",
"We approximate the inner loop sampler in time sublinear in K , constructing (cid:13) ( pseudo ) by generating S samples from (cid:13) ij using the MHW method.",
"To describe the general form of our algorithm, we introduce SparseSCVB0 statistics: local (e.g. document-level) expected counts NL , global (corpus-level) expected counts NG , and total expected topic counts NZ .",
"We approximate local sufficient statistics NL for each token i in document j via: N Ljk (cid:25) C j E (cid:13) ( pseudo ) ij [ s 2 S (cid:14) z sij = k S ] .",
"Since (cid:13) ( pseudo ) is sparse, we can efficiently update these statistics using only its non-zero entries.",
"This approach allows us to learn high-dimensional topic models efficiently on very large corpora.",
"Before updating global parameters in a similar fashion, it may also be beneficial to perform a small number of burn-in passes to learn the local parameters NL (Foulds et al., 2013).",
"For large-scale datasets (e.g. Wikipedia), SparseSCVB0 operates in a mini-epoch approach where we process a large subset of the corpus (e.g. 5 ; 000 documents) several (e.g. 3 ) times, before discarding and processing the next subset, and so on.",
"This allows a warm start of NL in repeating iterations, with a small memory overhead.",
"Pseudo-code of SparseSCVB0 for a mini-epoch is provided in Algorithm 1, which we discuss more in the next two sections, including model-specific aspects.",
"To deploy SparseSCVB0 for LDA, we use the M-H proposal distribution from (Li et al., 2014) which involves drawing samples exactly from document-specific sparse terms or approximately using cached samples from the alias table:",
"where p L ( k ) and q G ( k ) represent the sparse and dense part, respectively from Equation 6 and PL = k p L ( k ) , QG = k q G ( k ) .",
"When PLPL + QG > RandUnif (1) , we sample from the sparse part in O ( k d ) time depending on only the non-zero entries of N Lj (analogous to N (cid:2) j ), as N Lj is sparse in the LDA setting.",
"Otherwise, we sample from the dense part in amortized O (1) time using the alias method.",
"Unfortunately, due to the stochastic update, any entry of N Lj never becomes exactly zero, however it may maintain a very small value.",
"To address this, we apply a sparsification heuristic, where we threshold N Lj after burn-in passes for each document iteration.",
"We parameterize the threshold as (cid:28)(cid:26) Lt Cj C j ; where C j is the length of the j document, (cid:26) Lt Cj is the step size for the last token of this document, and constant 0 < (cid:28) (cid:20) 1 controls the sparsity.",
"In our preliminary experiments, we found that, somewhat counter-intuitively, the Monte Carlo and sparsification approximations actually improve convergence in early iterations.",
"We believe that this is because they help SCVB0 escape the initial high-entropy regime, during which convergence of variational algorithms is poor (Salakhutdinov et al., 2003).",
"This property makes the benefit of annealing insignificant, so we do not use simulated annealing for LDA inference, fixing T j = 1 .",
"An additional optimization of SparseSCVB0 for LDA inference can be performed by clump-ing (Teh et al., 2007; Foulds et al., 2013), where one update of the local parameters is performed for each distinct word type in each document.",
"This is performed by fixing the variational distribution, and scaling the update by number of copies of the distinct word type in the document.",
"If we observe the distinct word type w aj , which occurs m aj times in document j , the update is N Lj := (1 (cid:0) (cid:26) Lt ) m aj N Lj + (1 (cid:0) (1 (cid:0) (cid:26) Lt ) m aj ) C j (cid:13) ( pseudo ) aj .",
"The main contribution of our approach for the MMSGTM algorithm is to scale this algorithm in number of documents with online inference, as MMSGTM already scales sublinearly in K using MHW.",
"Foulds et al. (2018) use an MHW proposal which approximates the CGS update, interpreted as a product of experts (Hinton, 2002) in which each word in the context is an expert which weighs in multiplicatively on the update, with a mixture of experts.",
"In the proposal, they draw a Input word = learning Top words in top 2 topics for the input word NIPS SparseSCVB0 algorithms algorithm reinforcement problem problems j learning gradient descent rate weight machine Original SCVB0 learning networks network algorithm neural j propagation function reinforcement gradient algorithms Wikipedia SparseSCVB0 university school college education students research j public center science schools article information Original SCVB0 students school education university college schools j center year program degree systems information Table 4: Top words in the top 2 topics for an input word using original SCVB0 and SparseSCVB0 for MMSGTM.",
"context word w c uniformly from the context of the current word, w c (cid:24) Uniform ( j context ( w i ) j ) , and then sample a word based on the chosen context word's contribution to the update: q ( k ) : / N Gw c k + (cid:12) w c N Zk + w (cid:12) w (10) where N Gw c is analogous to the output context word-topic counts N (cid:8) w c of original MMSGTM model.",
"The proposal samples via the alias method in amortized O (1) time, instead of O ( k d ) time, since it does not involve the sparse term.",
"We use this proposal to approximate the CVB0 update for the model, which is a deterministic version of Equation 5, neglecting to exclude the current assignment of z i .",
"We update N Gw i (analo-gous to N (cid:8) w c ) for each current word w i locally, but update N Gw c and N z via minibatch counts ^ N Gw c and ^ NZ , respectively, for each context word w c of current word w i .",
"Unlike for LDA, C j in the local updates represents the total number of input word j in the corpus.",
"As we draw multiple output words from each topic assignment, the effective temperature of the MMSGTM model is much lower than for standard LDA which may cause problems with mixing and leads it to get stuck in the initial regime.",
"Following Foulds et al. (2018), we perform simulated annealing which varies the M-H acceptance ratio to improve mixing.",
"We parameterize the temperature schedule as T j = T 0 + (cid:21)(cid:20) (cid:22)j D , where T 0 is the target final temperature, (cid:20) (cid:20) 1 , constant (cid:22) controls the amount of temperature reduction after each document iteration j and (cid:21) controls the initial temperature.",
"In this section we study the performance of our SparseSCVB0 1 algorithm, on small as well as large corpora to validate the proposed method for",
"We compared SparseSCVB0 to SCVB0 and SVI.",
"For a fair comparison, we implemented all of them in the fast high-level language Julia V0.6.2 ( Bezanson et al., 2017).",
"We conducted all experiments on a computer with 64GB memory and an Intel Xeon E5-2623 V4 processor with 2.60 GHz clock rate, 8 (cid:2) 256KB L2 Cache and 10MB L3 Cache.",
"As we only use one single thread for sampling across all experiments, only one CPU core is active throughout the experiment with only 256KB available L2 Cache.",
"2 We used NIPS , Reuters-150 , PubMed Central , and Wikipedia as representative very small, small, medium, and large-scale datasets, respectively.",
"The NIPS corpus has 1740 scientific articles from years 1987-1999 with 2 : 3 M tokens, due to Sam Roweis.",
"The newswire corpus Reuters-150 contains 15 ; 500 articles with dictionary size of 8 ; 350 words.",
"PubMed Central has 320 M tokens across 165 ; 000 scientific articles and a vocabulary size of around 38 ; 500 words.",
"The Wikipedia corpus contained 4 : 6 million articles from the online 2 Since SSVI relies on multiple complex implementation details, we were unable to develop a fair implementation, nor were we able to obtain soure code for a previous implementation.",
"We expect that its accuracy would be similar to SVI, with a speed-up at or below that bestowed by a MHW-based inner loop.",
"(Mimno et al., 2012) apply it with only 200 topics for most of their experiments, and at most 1000 topics.",
"network data communication information communicate connection prison prisoners prisoner imprisoned jail escaped detained guards dog dogs shepherd hounds bred coat scent instinct eating companion song sung sing singing sings sang songs recorded melody tune votes vote cast elections voted candidate parties majority election wind winds blowing speed blows direction high low blown chill hour hours noon time daylight minutes midnight morning seconds Table 5: Randomly selected topics from a 10 ; 000 topic model trained using SparseSCVB0 on Wikipedia encyclopedia.",
"We used the dictionary of 7 ; 700 words which was extracted by Hoffman et al. (Hoffman et al., 2013).",
"There were 811 M tokens in the corpus.",
"We implement SparseSCVB0, original SCVB0 and SVI algorithms using the clumping optimization (Teh et al., 2007) technique.",
"In all LDA experiments, each algorithm was trained using mini-batches of size 20 for the NIPS corpus and 100 for other corpora.",
"For PubMed and Wikipedia, we chose mini-epoch subsets of size 5 ; 000 documents and processed for 5 passes.",
"We used a step-size schedule of scale ( (cid:17) + t ) (cid:20) as in original SCVB0 for global parameters, where t is the document iteration with scale = 100 : 0 , (cid:17) = 1000 : 0 and (cid:20) = 0 : 9 .",
"For document-level parameters, we used the scale = 1 : 0 , (cid:17) = 10 : 0 and (cid:20) = 0 : 9 , with t referring here to the word iteration of the current document.",
"In case of PubMed corpus, we found out that original SCVB0 and SVI tend to stuck in the initial regime for document-level step-size parameter (cid:17) = 1 : 0 which we later fixed by setting (cid:17) = 10 : 0 , while SparseSCVB0 didn't suffer from this problem due to the extra randomness from the sparse sampled updates.",
"Finally, we choose hyper-parameters (cid:11) = 0 : 1 and (cid:12) = 0 : 01 and burn-in pass of 5 for each document in all LDA experiments.",
"For SparseSCVB0, we used sample size S = 5 to approximate (cid:13) ( pseudo ) and (cid:28) = 1 =K for the sparsification heuristic on local parameters.",
"To study the acceleration benefits of our approach, we evaluated the runtime performance per Figure 3: Comparison of average log-likelihood vs: Time for LDA on",
"iteration on the number topics and the number of iterations.",
"In Figure",
"1(a), SparseSCVB0 is compared to original SCVB0 in terms of the average runtime per document iteration as a function of the number topics.",
"We see that original SCVB0 requires average linear runtime due to O ( K ) operations to compute collapsed variational distribution, while the average runtime for SparseSCVB0 grows sublinearly in K , due to O ( k d ) operations instead of O ( K ) operations.",
"SparseSCVB0 starts with approximately O ( K ) operations in its initial stage of iterations, but it starts getting a benefit from sparsification heuristic after burning in, as shown in Figure",
"1(b) for K = 10 ; 000 .",
"To evaluate the performance in terms of learned topic quality, we start by comparing all of the algorithms in qualitative experiments (see Table 1, Table 2, and Table 3) where we show randomly selected example topics, while all the models were trained on the NIPS, PubMed, and Wikipedia corpus for K = 500 , K = 1 ; 000 , and 1 ; 000 , respectively.",
"To get a quantitative insight we evaluated the topics using the per topic coherence metric, which measures the semantic quality of a topic based on the W most probable words for the topics ( Mimno et al., 2011), thereby approximating the user viewing experience.",
"In Figure 2, we see that SparseSCVB0 generates better quality topics with higher coherence scores than the other two models for K = 1000 with W = 10 after running all the models on each corpus for the same amount of time.",
"The coherence performance of SparseSCVB0 increases substantially in the case of the large-scale corpus (Figure",
"2(c)), since it gets the opportunity to use its runtime advantage and process more documents than the other algorithms.",
"To investigate model convergence, we measured the held-out log-probability versus wall-clock time for all the algorithms.",
"For each experiment we held-out a set of documents ( 150 documents for NIPS, 3500 documents for Reuters, and 1000 documents for all other corpora) as test data Figure 4: Comparison of runtime per iteration for MMSGTM in terms of:",
"and trained the model on the rest of the corpus.",
"Then, we split each test document in half, estimated local parameters on first half and finally computed the log-likelihood of the remaining half of the document.",
"Figure 3 shows the comparison of average log-likelihood versus wall-clock time for all four corpora.",
"In terms of log-likelihood, SparseSCVB0 provides an approximately similar result to original SCVB0 for the small corpus, but it converged to a better solution than others in the case of large corpora like Wikipedia (see Figure",
"3(c)), likely due to its processing a larger number of documents.",
"SparseSCVB0 enables the large-scale computation needed to learn high-dimensional topic models that could not feasibly be trained using previous methods due to their runtime complexity in the number of documents and/or topics.",
"We show randomly selected topics from the LDA model with K = 10 ; 000 in Table 5.",
"This big topic model was trained for 36 hours using SparseSCVB0 on Wikipedia.",
"We performed a dense initialization, running original SCVB0 for the first 5 hours, which was found to help avoid local optima.",
"We also conducted experiments to evaluate the performance of SparseSCVB0 for MMSGTM and compare with original SCVB0.",
"In all MMSGTM experiments, we kept the same step size schedule for global parameters as scale = 1 : 0 , (cid:17) = 5 : 0 and (cid:20) = 0 : 9 , but for local parameter updates we maintain a separate step-size schedule of scale ( (cid:17) + t ) (cid:20) for each input word, with t referring to the number of times we processed this input word, while (cid:17) and (cid:20) values remained the same.",
"For simulated annealing of SparseSCVB0, we used T 0 = 0 : 00001 , (cid:20) = 0 : 9 , (cid:22) = 5 and (cid:21) = j context j with a context size of 5 .",
"We kept the same number of document burn-in passes as we did for the LDA experiments.",
"In Figure 4, we show the runtime improvement of SparseSCVB0 over original SCVB0 for MMSGTM in a similar experiment to the one for LDA.",
"For MMSGTM, SparseSCVB0 substantially outperforms original SCVB0 by processing each document in amortized O (1) time.",
"We provide qualitative results in the case of MMSGTM model by showing several top words in the top 2 topics for an input word using original SCVB0 and SparseSCVB0 in Table 4 for K = 500 .",
"As for LDA, SparseSCVB0 allows us to generate topics with higher coherence scores compared to the original SCVB0 after running for the same amount of time (Figure 5) on both small and large corpora.",
"This paper introduced SparseSCVB0, a sparse version of the SCVB0 inference algorithm which performs fast, scalable high-dimensional topic model inference.",
"SparseSCVB0 leverages stochasticity to scale well in both the corpus size and in the number of topics.",
"It operates in the collapsed representation of topic models which leads to fast convergence while providing an improved variational bound.",
"We show that SparseSCVB0 reduces the operational complexity for the variational Bayes update of online topic models from O ( K ) to O ( k d ) time for LDA and amortized O (1) time for MMSGTM.",
"We evaluated and compared the performance of our approach with state-of-the-art models such as original SCVB0 and SVI to demonstrate that SparseSCVB0 converges much more efficiently, while maintaining high quality topics with a better per-topic coherence score."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective"
] |
[
"Although some recent works show potential complementarity among different state-of-the-art systems, few works try to investigate this problem in text summarization.",
"Researchers in other areas commonly refer to the techniques of reranking or stacking to approach this problem.",
"In this work, we highlight several limitations of previous methods, which motivates us to present a new framework Refactor that provides a unified view of text summarization and summaries combination.",
"Experimentally, we perform a comprehensive evaluation that involves twenty-two base systems, four datasets, and three different application scenarios.",
"Besides new state-of-the-art results on CNN/DailyMail dataset (46.18 ROUGE-1), we also elaborate on how our proposed method addresses the limitations of the traditional methods and the effectiveness of the Refactor model sheds light on insight for performance improvement.",
"Our system can be directly used by other researchers as an off-the-shelf tool to achieve further performance improvements.",
"We open-source all the code and provide a convenient interface to use it: https://github.com/yixinL7/ Refactoring-Summarization .",
"In neural text summarization, system designers commonly have flexible choices in model architectures (Rush et al., 2015; Kedzie et al., 2018), decoding strategies (Paulus et al., 2018) (e.g. beam search) and etc.",
"As a result, even on the same dataset, different selection biases of these choices will lead to diverse system outputs (Kedzie et al., 2018; Hossain et al., 2020).",
"To combine complementarity of system's output under different setups, researchers have made some preliminary efforts on two-stage learning (Collins and Koo, 2005; Huang, 2008; Gonzlez-Rubio Corresponding author. Figure 1: Illustration of two-stage learning. Doc , Hypo , Ref represent input document , generated hypothesis , gold reference respectively. Hypo' represents texts generated during test phase. Base and Meta represent learnable parameters in two stages. et al., 2011; Mizumoto and Matsumoto, 2016), consisting of",
"(i) a base-stage : first generates different outputs under different setups, and",
"(ii) a meta-stage : then aggregates them in diverse ways, exemplified by stacking that uses a high-level model to combine multiple low-level models (Ting and Witten, 1997), or reranking (Collins and Koo, 2005), which aims to rerank different outputs of one system.",
"Although these methods each play a role in different scenarios, they suffer from following potential limitations:",
"(i) Ad-hoc Methods : most existing methods are designed for a specific scenario.",
"For example, Li et al. (2015) and Narayan et al. (2018b) resort to reranking techniques to select summary-worthy sentences that are usually generated from one system.",
"By contrast, Hong et al. (2015) focus on summaries generated from different systems and use a non-neural system combination method to make their complementary advantages.",
"Few works explore if the complementarity existing in different scenarios could be utilized in a unified framework.",
"(ii) Base-Meta Learning Gap : parameterized models between two learning stages are relatively independent.",
"For example, Zhou et al. (2017) and Huang et al. (2020) adapt the seq2seq (Sutskever et al., 2014) framework as the meta model for combination, which takes the outputs of multiple base systems as a part of the inputs for machine translation.",
"As a result, there is no parameter sharing between the meta model and base systems as shown in Fig. 1, which prevents the meta model from fully utilizing the knowledge encoded in the base systems.",
"(iii) Train-Test Distribution Gap : regarding the meta-learning stage, there is a distribution gap between the training and test distributions.",
"Fig. 1 elucidates this phenomenon: the training distribution of Hypo differs from the test distribution of Hypo' .",
"Although both two are outputs from the base stage, Hypo would be more accurate (closer to gold summaries) since it is the output during the training phase.",
"In this work, we aim to address these limitations by proposing a general framework, named Refactor , which can not only serve as a base system to construct a summary by selecting sentences from the source document but also act as a meta system to select the best system output from multiple candidates.",
"The unification of base and meta systems allows them to share a set of parameters, thereby alleviating the Base-Meta learning gap.",
"Besides, we propose a pretrain-then-finetune paradigm for Refactor that mitigates the Train-Test distribution gap.",
"In practice, our proposed Refactor can be applied to different scenarios.",
"For example, as a meta system, it can be used for multiple system combination or single system re-ranking.",
"Our contributions can be briefly summarized as: (1) We dissect two major factors that influence the performance of two-stage learning when leveraging the complementarity among different systems:",
"(i) Base-Meta Learning Gap",
"(ii) Train-Test Distribution Gap; (2) We show these two types of gaps can be alleviated by promoting communication between the two stages in 4 , and therefore present a new paradigm where the base and meta learners are parameterized with shared parameters; (3) We have made comprehensive experiments (twenty-two top-scoring systems, four datasets).",
"In addition to achieving state-of-the-art results on CNN/DailyMail dataset (5) by a significant margin, the efficacy of the proposed Refactor opens up a thought-provoking direction for performance improvement: instead of pursuing a purely end-to-end system, a promising exploration is to incorporate different types of inductive biases stage-wisely with the same parameterized function.",
"Our experimental results demonstrate that there exists complementarity introduced by decoding algorithms (e.g. beam search) 5.5 or system combination 5.6 among the current state-of-the-art summarization systems, which can be effectively utilized by our model for boosting the system performance.",
"Existing works commonly design systems in an end-to-end fashion (Sutskever et al., 2014; Sukhbaatar et al., 2015), which, though effective, also proves to be insufficient in some scenarios (Glasmachers, 2017; Webb et al., 2019).",
"Instead of optimizing a system in an end-to-end fashion, one more flexible paradigm, stage-wise learning, is to break down the holistic process into different stages.",
"The basic idea is to incorporate different types of inductive biases stage-wisely and two typical examples are: Stacking and Reranking .",
"Stacking Stacking (a.k.a, Stacked Generalization) is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy (Ting and Witten, 1997).",
"In NLP research, this method has been widely explored in machine translation (MT) task.",
"Traditionally, it is used to improve the performance of statistical MT systems (Gonzlez-Rubio et al., 2011; Watanabe and Sumita, 2011; Duh et al., 2011; Mizumoto and Matsumoto, 2016).",
"Some recent work (Zhou et al., 2017; Huang et al., 2020) also extends this method to neural MT where the meta model and base systems are all neural models.",
"There is a handful of works about system combination for summarization (Hong et al., 2015), in which a feature-based meta model is used for combining unsupervised text summarization systems.",
"Reranking Reranking is a technique to improve performance by reranking the output of an existing system, which has been widely used across different NLP tasks, such as constituency parsing (Collins and Koo, 2005; Huang, 2008), dependency parsing (Zhou et al., 2016; Do and Rehbein, 2020), semantic parsing (Ge and Mooney, 2006; Yin and Neubig, 2019), machine translation (Shen et al., 2004; Mizumoto and Matsumoto, 2016).",
"Comparing reranking and stacking , both of them involve two-stage learning and the first stage would provide multiple candidate outputs as the input for the second stage.",
"However, they differ in the way how multiple candidate outputs are generated at the first stage.",
"Specifically, reranking usually decodes k -most qualified results during inference, using one base system.",
"By contrast, stacking generates multiple outputs that are usually from different base systems.",
"In what follows, we detail how to formulate summarization as a two-stage learning task.",
"Base system The system in the base stage aims to generate a summary based on the input text.",
"Specifically, given a document D = { s 1 , , s n } with n sentences, we refer to C as a candidate summary of D generated by a summarization system, which can be parameterized in diverse forms: C = BASE ( D, T , S , base ) (1) where BASE ( , base ) represents a base system that can be instantiated either as an extractive model or abstractive model with a specific experimental setup: training method T , decoding strategy S .",
"Meta system In practice, different choices of parameterized function BASE ( ) , training method T and decoding strategy S commonly lead to different candidate summaries, C = { C 1 , , C k } , where C represents a set of different candidate summaries.",
"The goal of the meta system is to utilize complementarities among C by popular techniques, such as reranking and system combination.",
"Specifically, given a set of candidate summaries C , a meta system is used to re-construct a new candidate summary C C = META ( D, C , meta ) (2) where meta represents learnable parameters of the meta system.",
"Despite effectiveness of existing meta systems, they, as briefly mentioned in 1, suffer from two major problems:",
"(i) Base-Meta Learning Gap and",
"(ii) Train-Test Distribution Gap .",
"In this paper, we propose the model Refactor that unifies the goal of the base and meta systems by the view that a summary can be generated by selecting the best combination of document sentences.",
"Therefore, both base and meta systems aim to select an optimal candidate summary, and they only differ in how the candidate summary set is constructed.",
"For example, Refactor can be a base system when the candidate summary set C is formed by directly enumerating different combinations of document sentences and would be a meta system when C represents summaries from different systems.",
"This formulation is advantageous in two points: (1) No matter where a system selects (from document sentences or multiple system outputs), the chosen criteria that define a good summary are shared.",
"Therefore, the learning process of base and meta systems can be parameterized using a set of parameters, maximizing the information-sharing across two stages and mitigating the Base-Meta Learning Gap .",
"where REFACTOR ( , refactor ) is the Refactor model, and the candidate summaries C can be constructed in different ways.",
"(2) Additionally, learning to select candidate summaries from document sentences enables the system to see more diverse candidates with different distributions.",
"This is effective for solving the Train-Test Distribution Gap , where the distribution of the meta system outputs in training samples deviates from the test one.",
"Specifically, our proposed Refactor first learns to select candidate summaries from document sentences (pre-trained Refactor ) and then learns to select candidate summaries from different system outputs (fine-tuned Refactor ).",
"Pre-trained Refactor takes as input a document D = { s 1 , , s n } as well as a set of candidate summaries C = { C 1 , , C m } , which can be constructed by enumerating possible combinations of source sentences with heuristic pruning.",
"For example, an extractive system could be used to prune unlikely sentences to control the number of candidates.",
"REFACTOR ( , refactor ) is instantiated as a score function which quantifies the degree to which a candidate summary C i is matched with the source document D .",
"by a BERT (Devlin et al., 2019) model.",
"SCORE ( ) is a function that measures the similarity between a document and candidate summary.",
"Contextualized Similarity Function To instantiate SCORE ( ) , we follow the forms as mentioned in Zhang et al. (2019b); Zhao et al. (2019); Gao et al. (2020), which have shown superior performance on measuring semantic similarity between documents and summaries.",
"Specifically, SCORE ( ) is defined based on the greedy matching algorithm, which matches every word in one text sequence to the most similar word in another text sequence and vise versa.",
"Given the document embedding matrix D = (cid:104) d 1 , , d k (cid:105) and the candidate embedding matrix C = (cid:104) c 1 , , c l (cid:105) encoded by BERT, SCORE ( ) can be calculated as: SCORE ( D , C ) = 2 R( D , C ) P( D , C ) R( D , C ) + P( D , C ) (5) where the weighted recall R , precision P are defined as follows: 1 R( D , C ) = (cid:80) i w i max j cos( d i , c j ) (cid:80) i w i + 1 , (6) P( D , C ) = (cid:80) j max i cos( d i , c j ) l + 1 , (7) w i is the weight of the i -th token in the document.",
"We use weighted recall R based on the assumption that for text summarization, tokens in the source document have different importance and the summary should capture the most important information of the source document.",
"Therefore, we introduce a weighting module built by a two-layer Transformer (Vaswani et al., 2017) assigning weights w i : w i = exp (dot( d i , d 0 ) / d ) (cid:80) j exp(dot( d j , d 0 ) / d ) , (8) where D = Transformer( D ) and d 0 = D [0] represents the embedding of the [CLS] token which encodes the global information.",
"d is the dimension of d i .",
"Learning Objective We use a ranking loss to learn the parameter refactor , inspired by the assumption (Zhong et al., 2020) that a good candidate summary should be as close with the source 1 We found that adding 1 to the precision and recall helps to stabilize the training.",
"document as possible.",
"Formally, L = (cid:88) i (cid:88) j>i max(0 , SCORE ( D , C j ) SCORE ( D , C i ) + ( j i ) c ) (9) where C i and C j denote the i -th and j -th sample of the candidate list which is descendingly sorted by the ROUGE (Lin, 2004) scores between the reference summary C and candidates.",
"That is, ROUGE( C i , C ) > ROUGE( C j , C ) for i < j .",
"c is the corresponding margin set to 0 .",
"01 .",
"In order to fit the distributions of the specific types of input, we then fine-tune Refactor using the outputs generated by the base systems.",
"Specifically, fine-tuning is also based on Eq.",
"9 where the candidate summaries C are generated by the base systems under different application scenarios.",
"Why does Pre-train and Fine-tune matter?",
"We elaborate on the proposed two-step training using a real case.",
"Fig. 2 depicts the distribution of ROUGE-1 scores regarding the candidate summaries in the pre-training stage training set, fine-tuning stage training set and test set on the XSum dataset, where we sample the same number of { document , candidate summaries } pairs.",
"We can observe that:",
"(i) there is a distribution gap between train and test samples in fine-tuning stage.",
"(ii) in pre-training stage the pre-trained Refactor has seen a large number of candidate summaries with diverse performance (ROUGE value), which improves its generalization ability.",
"In 5 we will show that the Pre-train and Fine-tune paradigm outperforms one-step training where the model is directly trained with data generated from the base systems.",
"Our Refactor can be used as different roles in different scenarios as follows.",
"The pre-trained Refactor can not only be fine-tuned for a better selection of candidate summaries, but also be regarded as a base system, providing one system output.",
"This feature of Refactor maximizes parameter sharing across the two training stages.",
"Both pre-trained Refactor and fine-tuned Refactor can be used as a meta system to select the best candidate when we have multiple system summaries.",
"In this work, we explore the following settings: (1) Single System : It considers re-ranking candidate summaries generated from a single abstractive system using beam search.",
"(2) Multi-system Summary-level : It is tasked to select the best candidate summary from the results of different systems.",
"(3) Multi-system Sentence-level : We also take a step towards the fine-grained fusion of summaries from extractive and abstractive systems.",
"Specifi-cally, here candidate summaries are generated by combining the results of different systems at the sentence level.",
"We mainly experiment on four datasets, whose statistics are shown in Tab.",
"1. CNNDM 2 (Hermann et al., 2015) is a widely used dataset containing news articles and the associated highlights which are used as the reference summaries.",
"We follow the work of Nallapati et al. (2016) for data preprocessing.",
"XSum 3 (Narayan et al., 2018a) contains online articles collected from BBC with highly abstractive one-sentence summaries.",
"PubMed 4 (Cohan et al., 2018) contains scientific papers collected from PubMed.com.",
"WikiHow 5 (Koupaee and Wang, 2018) is a large-scale dataset constructed from the articles using online WikiHow knowledge base.",
"Below, we mainly use BART, GSum and PEGASUS as the base systems since they have achieved state-of-the-art performance on at least one dataset.",
"BART (Lewis et al., 2020) is a large pre-trained sequence-to-sequence model that achieves strong performance on the abstractive summarization.",
"GSum (Dou et al., 2020) enhances the performance of BART using additional guidance information, which achieves the current state-of-the-art performance on the CNNDM dataset.",
"PEGASUS (Zhang et al., 2020) achieves competitive performance on various summarization datasets and is the current state-of-the-art on the XSum dataset.",
"To make a comprehensive evaluation of our proposed model, we additionally collect 19 top-scoring systems as base systems on CNNDM .",
"6 In details, for 5.7 we use the following systems: pointer-generator+coverage (See et al., 2017), REFRESH (Narayan et al., 2018b), fastAbsRL-rank (Chen and Bansal, 2018), CNN-LSTM-BiClassifier (Kedzie et al., 2018), CNN-Transformer-BiClassifier (Zhong et al., 2019), CNN-Transformer-Pointer (Zhong et al., 2019), BERT-Transformer-Pointer (Zhong et al., 2019), Bottom-Up (Gehrmann et al., 2018), NeuSum (Zhou et al., 2018), BanditSum (Dong et al., 2018), twoStageRL (Zhang et al., 2019a), pre-SummAbs (Liu and Lapata, 2019), preSummAbs-ext (Liu and Lapata, 2019), HeterGraph (Wang et al., 2020), MatchSum (Zhong et al., 2020), Unilm-v1 (Dong et al., 2019), Unilm-v2 (Dong et al., 2019), T5 (Raffel et al., 2020).",
"Neural system combinator : We use BERTScore (Zhang et al., 2019b) as an unsupervised baseline with neural models, which is an automatic evaluation metric computing the similarity of text pairs based on the corresponding BERT-encoded representations.",
"We use it to directly compute the similarity score between the source documents and candidate summaries.",
"Non-Neural system combinator : We use RankSVM 7 (Joachims, 2002) as a non-neural baseline.",
"We perform cross-validation on the development set for hyper-parameter searching and train the model on the development set.",
"The set of features is listed in Appendix A. Oracles : We compare our model with sample-wise Min , Max and Random oracles using ROUGE.",
"For the following experiments in 5.5, 5.6 and 5.7 on CNNDM , we pre-train the Refactor model with a candidate set generated by enumerating combinations of sentences in the source documents.",
"To reduce the number of candidates, we prune the sentences assigned with lower scores by an extractive model, BERTSum (Liu and Lapata, 2019), following Zhong et al. (2020).",
"The maximum number of candidates for one data sample is 20.",
"The pretrained Refactor is also used a base system in 5.6, whose outputs are used together with other base systems as candidate summaries.",
"For different experiments, we fine-tune pre-trained Refactor on the base system's output, and name the model as fine-tuned Refactor .",
"To analyze the effectiveness of the proposed two-stage training, we additionally train the model without the pre-training step, which is named as supervised Refactor .",
"The pre-trained BERT model we used is from Transformers library (Wolf et al., 2020).",
"8 We use Adam optimizer (Kingma and Ba, 2015) with learning rate scheduling.",
"lr = 0 .",
"002 min(step _ num 0 .",
"5 , (10) step _ num warmup _ steps 1 .",
"5 ) , where the warmup _ steps is 10000.",
"The model performance on the validation set is used to select the checkpoint.",
"Pre-training takes around 40 hours 7 http://www.cs.cornell.edu/people/tj/ svm_light/svm_rank.html 8 We use the bert-base-uncased' version with 110M parameters.",
"We use BART and GSum for this experiment, and use beam search to generate the candidate summaries where the beam size is set to 4.",
"The results are listed in Tab.",
"2, which shows that (1) Refactor can boost the base system's performance by a significant margin, (2) the fine-tuned Refactor outperforms supervised Refactor directly trained on the base system's outputs, showing the effectiveness of the two-step training.",
"Notably, we observe the fine-tuned Refactor can boost BART's performance from 44.26 to 45.15 on ROUGE-1, indicating that the top1 output selected by beam search is not always the best one, and Refactor can effectively utilize the complementarity introduced by considering all the beam search results.",
"Summary-level For summary-level combination, we explore two-system combination (BART & pre-trained Refactor ) and three-system combination (BART, GSum & pre-trained Refactor ).",
"The results are shown in Tab.",
"3. Sentence-level For sentence-level combination, we use BART and pre-trained Refactor as the base Setting Method R-1 R-2 R-L Base BART 44.26 21.12 41.16 Refactor 44.13 20.51 40.29 GSum 45.93 22.30 42.68 Two Min 40.40 17.64 37.12 Max 47.99 23.99 44.33 Random 44.25 20.87 40.78 BERTScore 43.95 20.45 40.23 RankSVM 44.66 21.32 41.44 Supervised 44.75 21.40 41.47 Pre-trained 44.66 21.19 41.15 Fine-tuned 45.04 21.61 41.72 Three Min 39.51 17.01 36.35 Max 49.94 25.59 46.30 Random 44.82 21.35 41.44 BERTScore 44.10 20.64 40.42 RankSVM 45.72 22.13 42.58 Supervised 45.80 22.25 42.68 Pre-trained 45.27 21.74 41.93 Fine-tuned 46.12 22.46 42.92 Table 3: Summary level combination on CNNDM .",
"systems.",
"The sentences of each system's output are merged together to form the candidate sentence set, and all combinations of three sentences in the candidate set are generated as candidate summaries.",
"To prune the candidates, we use tri-gram blocking to filter out candidates of which there exists an identical tri-gram in two sentences.",
"The average number of candidates in the test set is 15.8.",
"The results are shown in Tab.",
"4. We have the following observations: (1) the pretrained Refactor can already outperform the base systems, and (2) fine-tuning can further improve the performance.",
"Meanwhile, we notice there are two exceptions:",
"(i) For sentence-level combina-bin #sys Max Min Rand Best Ours 39-40 3 45.28 34.30 39.88 39.98 40.45 41-42 8 50.14 32.65 41.44 41.89 43.20 42-43 3 47.37 36.79 42.10 42.27 43.38 43-44 2 47.60 39.63 43.58 43.97 44.07 44-45 3 50.29 38.66 44.58 44.68 45.29 Table 5: Multiple system combination.",
"tion, supervised Refactor has similar performance as fine-tuned Refactor .",
"We hypothesis that this is because here the number of candidates in the fine-tuning data is relatively large, therefore directly training on the fine-tuning data is sufficient enough.",
"(ii) The pre-trained Refactor cannot outperform GSum model in the three-system combination setting in Tab.",
"3. The reason might be that GSum has much stronger performance than the other two systems, which intuitively makes the expected gain from system combination lower than other settings.",
"To evaluate the Refactor 's generalization ability, we explore another setting where the pre-trained Refactor is directly used to select the outputs of multiple systems without fine-tuning.",
"To this end, we collect 19 top-performing summarization systems on CNNDM dataset.",
"Here, we investigate if our Refactor can boost the performance of candidate systems with similar performance.",
"In addition, we also aim to investigate how the range width of different systems' performance affects Refactor 's performance.",
"Therefore, we group the candidate systems into equal-width bins based on their average ROUGE-1 scores, and evaluate our Refactor on each bin separately.",
"In Tab.",
"5 we report the average ROUGE-1 scores of the oracles, Refactor , and the best candidate system in each bin whose width is 1. Refactor consistently outperforms the best candidate system, showing its generalization ability.",
"Next, in Fig. 3 we plot the change of Refactor 's performance with different bin widths.",
"We define the success rate of Refactor with a given bin width to be the number of bins where Refactor outperforms the single best base system normalized by the total number of bins.",
"We observe that Refactor is more likely to improve the performance of base systems when the system-level performance of the Method XSum PubMed WikiHow R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L Base 47.12 24.46 39.04 43.42 15.32 39.21 41.98 18.09 40.53 Min 42.45 20.50 35.19 39.60 13.57 35.53 40.55 17.40 39.18 Max 51.51 28.04 42.70 45.23 16.72 40.67 43.00 18.44 41.44 Random 46.98 24.08 38.88 42.39 15.12 38.08 41.77 17.92 40.33 BERTScore 47.13 24.04 38.89 43.64 15.40 39.41 41.77 17.93 40.29 RankSVM 46.85 24.31 39.09 43.63 15.34 39.46 42.00 18.08 40.57 Pre-trained 47.45 24.55 39.41 43.58 15.36 39.38 41.97 18.03 40.52 Fine-tuned 47.32 24.31 39.22 43.72 15.41 39.51 42.12 18.13 40.66 Table 6: Single system reranking on other datasets.",
"base systems is similar.",
"Intuitively, if one base system is significantly better than the other systems, it is more difficult for Refactor to use other systems to complement the best base system.",
"Next, we move on to other text summarization datasets to evaluate our proposed method's strength beyond CNNDM dataset.",
"Some of the datasets used here are not as well-studied as CNNDM dataset, so there are less top-performing systems on these datasets.",
"Therefore, here we focus on the experiments of the single system setting.",
"Setup Regarding the pre-trained Refactor , we use an extractive oracle to select document sentences and use the combinations of these sentences as candidates.",
"In addition, since on Xsum the abstractive systems outperform extractive systems by a large margin, we use a pre-trained BART model with Diverse Beam Search (Vijayakumar et al., 2018) to generate 16 candidates per sample for pre-training.",
"Regarding system re-ranking, we use BART as the base system to generate the candidate summaries except on Xsum dataset, where we use PEGASUS since it achieves better performance.",
"Similar to 5.5, we use the outputs of beam search as the candidates.",
"We select the first 4 outputs as the candidates.",
"The results in Tab.",
"6 show that Refactor is able to bring stable improvement over the base systems.",
"The average summary length of these datasets varies from 23.3 ( XSum ) to 210.3 ( Pubmed ).",
"Therefore, the results here demonstrate the Refactor can be applied to datasets with different characteristics.",
"On XSum dataset, the pre-trained Refactor outperforms the fine-tuned Refactor .",
"This may result from the additional pre-training data we introduced using BART, which is effective enough to train the Refactor for reranking PEGASUS output.",
"Setup We choose the summary-level system combination setting on CNNDM test set in 5.6 as a case study, where the base systems are: BART and pre-trained Refactor , and then we use a fine-tuned Refactor 9 to combine them.",
"Specifically, we first",
"(i) define ( CBART , C Pretrain ) as the performance (i.e., ROUGE) gap on the candidate summary C .",
"(ii) then partition test samples into different buckets S 1 , , S n according to the performance gap .",
"(iii) calculate selection accuracy for each bucket, which represents how accurately the Refactor can 9 As introduced in 4.4, Refactor could be used as either a base system or a system combinator.",
"The results are shown in Fig. 4. We observe that the selection accuracy is increasing as the gap becoming larger, indicating that Refactor performs better on the candidate summaries with diverse performance.",
"Combining the results we get in 5.7, we conclude that Refactor has the largest potential gain when the base systems effectively complement each other They have similar system-level performance but diverse summary-level performance.",
"For example, each base system may perform significantly better than others on a subset of data with different characteristics but could not outperform others across the whole dataset.",
"We present a general framework for utilizing the complementarity of modern text summarization systems by formulating text summarization as a two-stage learning problem.",
"Our proposed model, Refactor , can be used either as a base system or a meta system, effectively mitigating the learning gaps introduced in the two-stage learning.",
"Experimental results show that Refactor is able to boost the performance of the base systems, and achieves the state-of-the-art performance on CNNDM and XSum datasets.",
"We believe this work opens up a new direction for improving the performance of text summarization systems apart from an iterative process of searching for better model architectures The gain of performance could be made by fully investigating and utilizing the complementarity of different systems with various architectures, problem formulations, decoding strategies, etc.",
"We thank Professor Graham Neubig and anonymous reviewers for valuable feedback and helpful suggestions.",
"This work was supported in part by a grant under the Northrop Grumman SOTE-RIA project and the Air Force Research Laboratory under agreement number FA8750-19-2-0200.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"abstain",
"method",
"other",
"other",
"method",
"objective",
"other",
"method",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"In this paper, we propose a neural model EPT-X (Expression-Pointer Transformer with Ex-planations), which utilizes natural language explanations to solve an algebraic word problem.",
"To enhance the explainability of the encoding process of a neural model, EPT-X adopts the concepts of plausibility and faithfulness which are drawn from math word problem solving strategies by humans.",
"A plausible explanation is one that includes contextual information for the numbers and variables that appear in a given math word problem.",
"A faithful explanation is one that accurately represents the reasoning process behind the model's solution equation.",
"The EPT-X model yields an average baseline performance of 69.59% on our PEN dataset and produces explanations with quality that is comparable to human output.",
"The contribution of this work is two-fold.",
"(1) EPT-X model: An explainable neural model that sets a baseline for algebraic word problem solving task, in terms of model's correctness, plausibility, and faithfulness.",
"(2) New dataset: We release a novel dataset PEN (Problems with Explanations for Numbers), which expands the existing datasets by attaching explanations to each number/variable.",
"Algebraic word problem solving is a challenging task for understanding natural language.",
"As shown in Table 1, a model needs to interpret a word problem into a solution equation to solve the problem.",
"Recent neural approaches have employed encoder-decoder architecture to tackle this task and achieved remarkable answer correctness (Huang et al., 2018; Chiang and Chen, 2019; Amini et al., 2019; Kim et al., 2020; Ki et al., 2020): ranging from 65% to 84% depending on datasets.",
"So, as the model delivers a plausible answer, researchers have a firm belief that an encoder component of a neural model can comprehend the problem correctly.",
"However, this belief has less been verified due to Q. Tom has 12 coins in quarters and nickels.",
"Our novel model, Expression-Pointer Transformer with Explanations (EPT-X), is inspired by some pedagogical studies about human strategies on understanding an algebraic word problem (Conway and Polya, 1985; Jitendra et al., 2007; Montague, 2008; Jitendra and Star, 2012).",
"In classrooms, teachers ask students to make an explanation or a diagram that depicts the role of each number written in the problem.",
"Then, students use these explanations for numbers to build a correct equation.",
"That is, understanding a problem produces explanations that satisfy the following two criteria.",
"(1) plausibility : A plausible explanation is one that includes contextual information for the numbers and variables that appear in a given math word problem.",
"Especially, as humans recognize each number/variable individually, the explanations should reveal what each number or variable represents in the context of the given problem.",
"(2) faithfulness : A faithful explanation is one that accurately represents the reasoning process behind the model's solution equation (Jacovi and Goldberg, 2020).",
"In other words, they should imply a reason behind selecting operators or operands.",
"To reflect these two criteria in EPT-X, we adopt a two-phase architecture: (1) explaining num-bers/variables and (2) building solution equations.",
"Though Ling et al. (2017) attempted to generate explanations, this work is different from ours in that their model focused on explaining the decoding process.",
"So, they have less explored the above two criteria.",
"In contrast, this paper attempts to explain how the model understands the given word problem by modifying an encoder component of a neural model.",
"As humans successfully solve word problems by explaining their understanding, we expect our EPT-X model to achieve a good performance in terms of three criteria: correctness of equations, plausibility of explanations, and faithfulness of explanations.",
"Through several analyses, our paper shows the following two contributions: 1. EPT-X model: We propose a baseline model that can generate explanations and solve algebraic word problems, in terms of correctness, plausibility, and faithfulness.",
"2. New dataset: We release a novel dataset PEN (Problems with Explanations for Numbers), which expands the existing datasets by attaching explanations to each number/variable.",
"Correctness: Researchers have attempted to build a model that solves word problems.",
"Early attempts used hand-crafted features collected by experts to make a model understand a word problem (Kushman et al., 2014; Roy and Roth, 2015; Koncel-Kedziorski et al., 2015; Zhou et al., 2015; Upadhyay et al., 2016; Roy and Roth, 2017).",
"Although researchers can interpret these models using the features, extending these studies to other datasets is limited as designing features is labor-intensive.",
"On the other hand, recent studies have employed neural models (Wang et al., 2017; Huang et al., 2018; Chiang and Chen, 2019; Amini et al., 2019; Kim et al., 2020; Ki et al., 2020) and achieved answer correctness ranging from 62% to 84%.",
"Though their extensibility is better than handcrafted features, it becomes harder to interpret how a neural model understands a word problem.",
"Plausibility: To make a neural model that explains its reasoning process, Ling et al. (2017) built a model that outputs both a computation process and a rationale behind the process.",
"Though their model generated a natural language phrase that explains a computation step in advance, the model is not enough to meet the plausibility criterion because of two issues.",
"First, it is not guaranteed whether their model explains all numbers and variables required to solve the problem.",
"As they focused more on explaining the model's computation, their model often skips explaining its understanding of numbers and variables stated in a problem.",
"Second, it is not confirmed whether their model generates rationale comparable to that of humans.",
"Though they measured their quality of rationale using BLEU-4 (Papineni et al., 2002), the reported score of 27.2 is somewhat low and not compared with any human-level performance.",
"We suspect that this low-quality explanation affected the low correctness of their model: 36.4%.",
"Therefore, it is worthwhile to build a new model that fulfills the plausibility criterion.",
"Faithfulness: As far as we know, studies on solving algebraic word problems have not measured the faithfulness of a generated explanation.",
"Existing studies so far measured the quality of explanations using plausibility only.",
"Following Jacovi and Goldberg (2020), we define faithful explanation as one that accurately represents the reasoning process behind the model's solution equation.",
"Humans expect an explanation to be faithful.",
"However, a model can generate an explanation that may not be related to the equations (Jacovi and Goldberg, 2020); it can generate random plausible sentences independently from the process of generating solution equations.",
"Therefore, measuring faithfulness is meaningful in that a highly faithful explanation reflects a solution equation generation process that is expected by human problem solvers.",
"The proposed model, Expression-Pointer Transformer with Explanations (EPT-X) 1 , is a variant of Expression-Pointer Transformer (EPT; Kim et al. 2020), which is state-of-the-art correctness model.",
"Figure 1 depicts the two phases EPT-X model.",
"(1) Plausibility : In phase 1, EPT-X receives a problem text as an input and generates explanations for each number/variable.",
"The number of variables is also predicted in this phase.",
"(2) Faithfulness : In phase 2, EPT-X receives both the original problem and the generated explanations as inputs and then builds an equation using EPT.",
"To jointly train these two phases, we add up the loss functions for the number 1 http://github.com/snucclab/ept-x 4443 Figure 1: The two-phase pipeline of generating explanations and equations in our EPT-X model.",
"of variables, explanations, and equations; all three use smoothed cross-entropy (Szegedy et al., 2016) with = 0 .",
"01 .",
"Phase 1 is a three-step procedure for generating explanations as shown in the top part of Figure 1. Phase 1 contains two components: text encoder and explanation decoder.",
"Step 1-1.",
"Compute problem text vectors The text encoder receives a natural language problem as an input and computes problem context vectors.",
"To utilize world knowledge in the computation process, we used ELECTRA (Clark et al., 2020), a pre-trained language model.",
"After applying the text encoder, we obtain the problem context vector w s for each token w s in the given problem.",
"Step 1-2.",
"Predict the number of variables Using the problem context vectors, EPT-X predicts the number of required variables N to solve the given problem.",
"Using the first token's problem context vector w 0 , we compute the probability distribution of N as follows: P ( N ) = softmax (FF n 1 (ReLU (FF n 2 ( w 0 )))) , where FF( ) indicates the feed-forward layer.",
"Step 1-3.",
"Generating plausible explanations The explanation decoder then produces explanations using problem context vectors as memories.",
"The decoder uses a Transformer (Vaswani et al., 2017) decoder and a pointer-generator network (See et al., 2017).",
"Before predicting the next explanation token x t +1 , the Transformer decoder computes a hidden state h t based on the problem context vectors w s and previously generated explanation tokens x 1 , , x t .",
"To utilize world knowledge in generating explanations, we adopt Rothe et al. (2020) and use ELECTRA (Clark et al., 2020) as the initial weight.",
"The pointer-generator head receives the computed h t and predicts the next token.",
"Let p g , P v , and P c be the probability of using the generated word, the probability of generating from the vocabulary, and the probability of copying from the problem, respectively.",
"Then, the next token x t +1 is predicted as follows: x t +1 = arg max p g P v ( ) + (1 p g ) P c ( ) , p g = (FF g ( w t h t E( x t 1 ))) , P v ( ) = softmax (FF v ( h t )) , P c ( ) = (cid:80) w s : w s = attn( w s , h t ) , w t = (cid:80) w s attn( w s , h t ) , where ( ) , E( ) , and attn( ) indicate the sigmoid, embedding, and single-head attention scoring function, respectively.",
"And indicates concatenation of vectors.",
"Plausibility of explanation is implemented during this stage by generating an explanation for each number/variable separately.",
"We use unique initial input values for all numbers and variables.",
"This method has been used in other studies to bind the decoder to a specific context (Raffel et al., 2020; 4444 Keskar et al., 2019).",
"For numbers, instead of using the initial input value [CLS] ' of the Transformer decoder, we use the input [CLS] explain: context [SEP] , where the context part depends on the number or variable.",
"For the numbers, we use a window of tokens that are near the given number token.",
"For example, if the window size is three, we use three tokens placed before and after the given token.",
"For variables, we use the variable index because variables do not appear in the problem.",
"So, for example, the initial input value of the n th variable becomes [CLS] explain: variable n [SEP] . 3.2 Phase 2. Building solution equations Phase 2 is a three-step procedure for producing equations as shown in the bottom part of Figure 1. Phase 2 uses the same text encoder from Phase 1. Step 2-1.",
"Recombine explanations Inspired by human paraphrasing strategies (Conway and Polya, 1985; Gagnon and Maccini, 2001; Montague, 2008), EPT-X paraphrases the original problem by recombining its understanding.",
"First, the model places each explanation and the corresponding number token value into a sentence: explanation is a number value . for numbers and What is explanation ? for variables.",
"Then, EPT-X creates a recombined problem by concatenating these paraphrased sentences.",
"We randomly recombined one of the reference explanations in the training process as EPT-X may not generate explanations ideally.",
"Step 2-2.",
"Compute recombined context vectors The text encoder once again receives both the original problem and the recombined problem as inputs and computes the recombined context vectors r i for each input token r i .",
"We designed EPT-X to use both problems for two reasons.",
"First, using the original problem can avoid information loss.",
"Second, using the recombined problem can make the equation decoder exploit the information of explanations.",
"We arrange these two problems into the text encoder as follows: [CLS] original [SEP] recombined [SEP] .",
"Step 2-3.",
"Generate equations faithfully The equation decoder then produces equations using the recombined context vectors as memories.",
"Following the EPT model (Kim et al., 2020), the decoder produces equations using expression tokens, each of which is a tuple of an operator and relevant operands.",
"So, the equation decoder predicts the next j th expression as follows.",
"First, the decoder receives expression tokens generated so far and converts them into embedding vectors v k ( k = 0 , , j 1) .",
"Then, using these embedding vectors v k and recombined context vectors r i , the decoder builds an equation context vector q j for the next expression.",
"Lastly, the decoder simultaneously predicts the next operator and its required operands using q j .",
"Thus, when we translate expressions into an equation, we can compute an answer to a problem.",
"The faithfulness of explanation is implemented during this stage by using explanations as the input data source.",
"We change the input format of numbers and variables in EPT's equation decoder to use explanations.",
"Originally, EPT used different types of vectors to input them: the encoder's hidden state for each known number and the decoder's hidden state for each unknown variable.",
"However, in EPT-X, we guide the model to utilize the information from the explanation when writing an equation.",
"As all numbers and variables appear in the recombined problem, EPT-X uses the vector r i corresponding to each number/variable.",
"We release Problems with Explanations for Num-bers' (PEN) 2 , an algebraic word problem dataset with problem texts, equations, and explanations of numbers/variables for each problem to train and evaluate EPT-X.",
"As existing datasets for algebraic word problems do not contain explanations, we provided explanations on the existing three benchmark datasets on solving algebraic word problems 3 : ALG514 (Kushman et al., 2014), DRAW-1K (Upadhyay and Chang, 2017), and MAWPS (Koncel-Kedziorski et al., 2016).",
"The following sections introduce the two stages of building PEN: preparation for correcting errors and annotation for collecting explanations.",
"We corrected the errors and organized the data in three steps.",
"In the first step, we revised the problems' typos, grammatical errors, and logical flaws.",
"For example, we found a problem asking 2 http://github.com/snucclab/pen 3 Though we considered using AQuA-RAT (Ling et al., 2017), which includes rationale about computation, we found that using it is intractable since we have to re-collect explanations for numbers and variables in most problems.",
"about Senators' after telling a story about the House of Representatives.' So we replaced the out-of-context term with the other one.",
"Second, we extracted numeric words from the modified text using WordNet (Fellbaum, 1998); Arabic numerals, fractions, ordinals, and their synonyms were extracted.",
"Third, to normalize equations, we re-formulated them according to nine source formulas organized by Mayer (1981) and four formulas organized by Carpenter et al. (1996).",
"Among 3,886 problems from the three datasets, we corrected 3,581 problems in the PEN dataset.",
"We excluded 305 problems because they are (1) exact duplicates of others (303 problems) 4 or (2) not an algebra problem (2 problems) 5 .",
"After excluding 305 problems, we further revised incorrect equations: 62 of the 3,581 problems (1.73%).",
"When collecting natural language explanations, the explanations can be irrelevant to the given problem without any guidelines.",
"Thus, we instructed our workers to follow eight rules, including Use at least one word appearing in the problem text when writing an explanation.",
"Moreover, we make workers obey the rules consistently using a web-based system.",
"Details about all eight rules and the web-based system are illustrated in Appendix A. Fourteen skilled workers provided explanations for numbers and variables in a problem.",
"Before assigning workloads, we split the entire dataset into training (80%), development (10%), and test (10%) sets.",
"Then, we collected multiple explanations for each problem; 3 for training set and 4 for the other.",
"Table 2 shows the statistics of the PEN dataset.",
"4 Since we manually corrected errors and flaws in each problem and combined three different datasets, some problems become exact duplicates of other problems.",
"5 These problems cannot be solved with a multivariate equation alone: problems about least common multiples or counting the number of cases.",
"To verify whether the EPT-X model can solve an algebraic problem correctly while generating plausible and faithful explanations, we conduct three types of analyses: model performance analysis, quantitative error analysis, and qualitative output analysis.",
"This section illustrates each analysis and further implementation details.",
"The model performance analysis measures the model's correctness, which is the percentage of correctly answered problems on the PEN dataset.",
"We regard an answer to be correct only if the answer values of all the variables in the problem are paired and solved correctly.",
"For example, Table 1 shows that a correct answer contains two variables and answer values of x = 8 and y = 4 .",
"Existing studies regarded x = 4 , y = 8 to be a correct solution(Kushman et al., 2014; Kim et al., 2020; Lee and Gweon, 2020).",
"However, in the context of generating explanations along with solutions, different explanations are generated with different variables (Conway and Polya, 1985; Montague, 2008).",
"Therefore, we enforce a stricter constraint that requires that a variable should be matched with a correct answer value.",
"Using the correctness, we compare the EPT-X model with two previous inexplainable models (EPT (Kim et al., 2020), GEO (Lee and Gweon, 2020)) and human performance.",
"The EPT is a model that generates one expression at a time and uses pointers instead of classifiers, and it achieved state-of-the-art accuracy on MAWPS and DRAW datasets.",
"The GEO is a model that mixes encoder and decoder outputs before predicting a token, and it achieved state-of-the-art accuracy on DRAW and ALG514 datasets.",
"To establish a human performance baseline, our research team manually checked for the answer correctness of the original datasets of ALG514, DRAW, and MAWPS.",
"We found that 62 of the 3581 problems were incorrectly solved, thus yielding a human baseline performance of 98%.",
"We conduct four types of error analyses to understand the possible cause of EPT-X solution errors:",
"plausibility test, faithfulness test, faithfulness control test, and error propagation test.",
"Through these four types of analyses, we show how the generated explanations by the EPT-X model can be used to understand the equation generation process.",
"To test the plausibility of explanations, we used BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), CIDEr (Vedantam et al., 2015), and BLEURT (Sellam et al., 2020).",
"These metrics can measure the extent of similarity between a generated explanation and the reference explanation in the dataset.",
"Using the above four metrics, we compare the EPT-X model with a baseline model and human performance.",
"First, we compare EPT-X with a baseline model which only contains Phase 1 (P1-only).",
"This model can generate an explanation for each number/variable but cannot solve a word problem.",
"Second, we compared EPT-X with human performance.",
"While collecting explanations for each number/variable, we collected four sets of explanations.",
"Of these four, one set is randomly set aside to serve as a hypothesis sentence and the other three as reference sentences when measuring human performance.",
"To measure the faithfulness of EPT-X, we used two metrics: sufficiency and comprehensiveness (DeYoung et al., 2020).",
"First, in our context, comprehensiveness means were explanations (Step 13) needed to produce the solution equation (Step 2-3)?.",
"Figure 2",
"(a) shows the measurement setup for comprehensiveness.",
"Specifically, we examined the amount of change between the two output solution equations: the equation from the original Phase 2 setup and the equation that is generated with only the problem text for the input of Phase 2. Since the output equation is not a single prediction as in DeYoung et al. (2020), we measured the change in the solution equations using tree edit distance (Zhang and Shasha, 1989).",
"Secondly, in our context, sufficiency means do explanations (Step 1-3) contain enough information to produce solution equation (Step 2-3)?.",
"Figure 2",
"(b) shows the measurement setup for sufficiency.",
"Here, we examined the amount of change between the two output solution equations: the equation from the original Phase 2 setup and the equation that is generated with only the generated explanation.",
"Similar to the comprehensiveness measure, the difference in equations was also computed using tree edit distance.",
"To provide a statistical baseline for interpreting the two metrics of comprehensiveness and sufficiency, we adopted a bootstrapping method (Koehn, 2004).",
"We sampled 500 bootstrapped samples (each sample has 50 problems) to estimate the population distribution of each metric.",
"After the estimation, we conducted hypothesis testing for each metric.",
"For comprehensiveness C , we set the following hypothesis HA : C > 1 as we expect to observe changes in equations when using only the problem input compared to using both problem and explanation input.",
"For sufficiency S , we set the following hypothesis HA : S < 1 as we expect to observe no change in equations when only using the explanation input.",
"To examine a trade-off relationship between correctness and faithfulness, we train and analyze two variants of EPT-X, whose faithfulness is controlled.",
"The first model is an inherently faithful model (EPT-XF) that uses only the explanation, but not the original problem, as an input to Phase 2. As EPT-XF entirely depends on the explanation to generate a solution equation, the model passes the test of faithfulness by definition.",
"The second model is an inherently unfaithful model (EPT-XU) that uses only the original problem, but not the explanation from phase 1, as an input to Phase 2. As EPT-XU ignores the explanation input, the model fails the test of faithfulness by definition.",
"Implementation details on these two models are explained in Appendix B. 4447 5.2.4 Error propagation test To examine how the quality of explanation affects the model's correctness, we used two models, EPT-X and EPT-XF.",
"Both models employ a two-phase architecture, thus they are prone to errors in both phases.",
"For the error propagation test, we examine how the performance of Phase 1 (plausibility) affects the end-task performance (correctness).",
"Note that testing EPT-X may not reveal the errors that are solely propagated from the generated explanation because EPT-X also uses the original problem as an input.",
"Therefore, EPT-XF performance was also measured in order to examine the impact of errors from the generated explanation only.",
"We measured the amount of error propagation in the two models, EPT-X and EPT-XF, by comparing correctness under two conditions: control and experiment.",
"Under the control condition, the models build solution equations based on explanations generated by themselves.",
"On the other hand, under the experiment condition, they build solution equations based on the gold standard explanations.",
"Then, we measure the change of correctness between these two conditions for each model.",
"Here, we expect that the change to reveal the proportion of problems affected by errors that are propagated from Phase 1. 5.3 Qualitative output analysis The explanations generated by EPT-X were analyzed qualitatively using two methods.",
"First, to measure the quality of the generated explanation itself, we manually labeled the quality in the PEN's development set using two criteria: (1) plausibility and (2) faithfulness .",
"Human coders were asked to label an explanation to be plausible when the explanation and the original problem text are coherent in meaning.",
"And for faithfulness, we asked human coders to build a solution equation using only the explanation produced from the EPT-X model.",
"If the generated solution equation is identical to the EPT-X generated solution equation, the explanation is labeled to be faithful.",
"Second, to find the primary cause of errors when generating an explanation, we manually classified errors in EPT-X's explanations.",
"The errors were categorized by comparing the generated explanation with the gold-standard explanation.",
"We also used the PEN development set for this analysis.",
"We describe three major implementation details used for training EPT-X: encoder, optimizer, and training epochs.",
"For the text encoder component, EPT-X uses the base version of ELECTRA (Clark et al., 2020).",
"We fixed its embedding and tied the embedding with the weights of FF v in the explanation decoder to preserve the world knowledge in the embedding and to stabilize the training procedure.",
"For the optimizer, we used LAMB (You et al., 2020) with a learning rate of 0.00176, which was found from a grid search on the development set.",
"Finally, for the training epochs, we trained EPT-X for 500 epochs.",
"Appendix C lists additional details of the model, including hardware, software, libraries, hyper-parameters, and random seeds.",
"The result of three analyses reveals that the EPT-X can generate an equation correctly based on a plausible and faithful explanation.",
"First, Section 6.1 presents the result of the model performance analysis, which shows that EPT-X can achieve correctness 5% lower than previous inexplainable models.",
"Second, Section 6.2 shows the result of error analysis, which reveals that many of EPT-X's errors are due to insufficient explanations.",
"And lastly, Section 6.3 shows the result of qualitative analysis, which reveals three types of errors found in the explanation generation process of EPT-X.",
"The model performance analysis shows that EPT-X generates equations with 69.59% accuracy on the PEN dataset, despite being a two-phase model.",
"Table 3 shows that adding the explanation generation functionality decreases the accuracy by approximately 5%, compared to state-of-the-art model EPT 6 .",
"We suspect that this performance 6 The results on the whole dataset is reported in this section, whereas results on each subset are reported in Appendix D. 4448 BLEU ROUGE CIDEr BLEURT Dev: Human 57.16 78.66 343.0 71.44 P1-Only 60.26 78.02 346.7 69.11 EPT-X 60.07 77.99 347.1 69.61 Test: Human 55.69 78.28 347.3 71.51 P1-Only 59.32 77.99 342.9 69.69 EPT-X 60.49 78.34 341.5 69.59 Table 4: Plausibility of EPT-X on PEN dataset Dev.",
"drop is due to propagation of the errors in the generated explanations.",
"Regardless, the results of EPT-X are meaningful since the model automatically generates explanations of problems without too much decrease in correctness.",
"The difference of 5% is quite promising compared to that of the previous explainable model (Ling et al., 2017), about 40%, although a direct comparison is not possible due to differences in datasets.",
"EPT-X achieved plausibility scores that are comparable to humans and the P1-only model, as shown in Table 4. The differences in plausibility scores between EPT-X and the other two baselines range between 1 to 2 points.",
"This result indicates that EPT-X can select proper words to generate an explanation.",
"In fact, BLEU-4 score of 60 is promising compared to Ling et al. (2017) (27.2).",
"Given that EPT-X achieved human-level performance in terms of plausibility, but not for correctness, we explored the faithfulness metric to examine additional causes for the low model performance.",
"The results of the faithfulness test showed two characteristics of the explanation output of EPT-X.",
"In terms of comprehensiveness, the generated explanation contains some information required to generate a solution equation, as evidenced by Table 2. Here, we observe that EPT-X passed the comprehensiveness test for the 99% confidence Dev.",
"level.",
"That is, compared to using both inputs in Phase 2, forcing EPT-X to use only the original problem input made EPT-X generate a different solution equation.",
"This result suggests that the generated explanation provides information, which is not provided by the original problem but contributes to generating a solution equation.",
"Meanwhile, in terms of sufficiency, the generated explanation may not provide sufficient numeric information to generate a solution equation.",
"Table 2 shows that EPT-X failed the sufficiency test under the confidence level of 95%.",
"That is, EPT-X generates different solution equations when it only receives the generated explanation as input in Phase 2. So, the explanation generated in Phase 1 does not contain sufficient information, which is contained in the original problem, to generate a solution equation.",
"As the generated explanation fails to capture some information from the original problem, the correctness may change when we control the faithfulness of a model.",
"Specifically, the control test shows that there is a trade-off between faithfulness and correctness; as faithfulness increases, the correctness decreases.",
"Table 6 shows that the most faithful model EPT-XF achieves the lowest correctness score, which is 6% lower than EPT-X.",
"Conversely, the most unfaithful model EPT-XU achieves the highest correctness score, which is 4% greater than EPT-X.",
"Thus, the results of the faithfulness test and the faithfulness control test imply that in order to achieve a higher correctness score, we should verify whether the explanation contains sufficient\" information to build a correct equation. 6.2.4 Error propagation analysis The error propagation test shows that the generated explanation does not contain sufficient information to build a correct equation for some problems. Table 7 shows that both EPT-X and EPT-XF can outperform the EPT model by 8% when using a gold standard explanation as an input. However, 4449 Generated Gold Change Dev.: EPT-XF 66.03 83.29 +17.26 EPT-X 72.88 86.03 +13.15 Test: EPT-XF 62.19 85.20 +23.01 EPT-X 69.59 85.21 +15.62 Table 7: Result of error propagation test of explanation using explanations generated by the EPT-X model may decrease the correctness by more than 15%. That is, more than 15% of errors are due to information loss in Phase 1. 6.3 Qualitative output analysis Quality of explanations: The qualitative analysis showed that the quality of the generated explanations could be improved given that information required for solving word problems is missing. When we manually labeled the generated explanations for plausibility, 167 of 365 problems (45.8%) were labeled as plausible. Thus, the majority of the explanations are insufficient or contain incorrect information to generate a correct equation. Similarly, when we manually labeled the generated explanations for faithfulness, 201 of 365 problems (55.1%) were labeled as faithful. That is, when the same yet insufficient explanations were used to generate equations, the equations generated by EPT-X and humans were different. Three categories of errors: Additional qualitative analysis found three possible causes for the EPT-X errors. Here, we will briefly discuss the causes, and the detailed examples are illustrated in Appendix E. First, when a problem mentions several entities with similar properties (e.g., Heather's weight and Emily's weight), the difference between entities is ignored in the encoder (69 of 118 incorrect problems; 58.5%). This error implies that the context window used in Step 1-3 may not be big enough to distinguish two different entities. Second, if a problem provides multiple situations related to an entity (e.g., outward trip versus return trip), assigning a corresponding number to the correct situation fails in the encoding process. Detailed explanations of situations of a word problem were often omitted in the encoder (57 of 118 problems; 48.3%). Third, when a problem contains some irrelevant numbers, which are not used in solving the problem (e.g., year), sometimes an explanation for an irrelevant number was generated instead of the relevant one in the encoding process (32 of 118 problems; 27.1%). The second and third error types imply that sharing the encoder in Phases 1 and 2 might have caused confusion. The goal of the encoder in Phase 1 was to provide a detailed explanation of a given number, whereas the goal in Phase 2 was to build an equation, which involves ignoring some details to build an abstraction in the form of an equation. 7 Conclusion This study proposed a novel neural model EPT-X, Expression Pointer Transformer with Explanations, which generates explanations along with solution equations. The EPT-X model was designed to address two criteria of plausibility and faithfulness when generating an explanation. To address plausibility, the model generates explanations for each number/variable in the solution equation separately. 
And to address faithfulness, the model produces equations based on the information in the generated explanation. In addition to EPT-X, we release a new dataset, Problem with Explanations for Numbers (PEN), which extends existing three algebraic word problem datasets by augmenting explanations for numbers/variables. Using the PEN dataset, we conducted three analyses. The model performance analysis revealed that EPT-X could produce a correct equation with 69.59% accuracy. The quantitative error analysis showed that the EPT-X model could produce a plausible albeit insufficient explanation. Lastly, the qualitative output analysis identified three categories of errors made when generating explanations. Despite the insufficiency of explanations generated by the EPT-X model, our work is significant in that we demonstrated the possibility of generating explanations while solving an algebraic word problem. For future work, we plan to improve the correctness and faithfulness of EPT-X to enhance the existing state-of-the-art model. Acknowledgements This work was supported by the National Research Foundation of Korea (NRF) grant (No. 2020R1C1C1010162) and the Institute for Information & communications Technology Promotion (IITP) grant (No. 2021-0-02146), both funded by the Korean government (MSIT). Also, we thank S. Oh, Y. Lee, J. An, J. Kim, and H. Rhim who helped in building the PEN dataset. 4450 References Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha-jishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , pages 23572367, Minneapolis, Minnesota. Association for Computational Linguistics. Thomas P Carpenter, Elizabeth Fennema, and Megan L Franke. 1996. Cognitively guided instruction: A knowledge base for reform in primary mathematics instruction. The elementary school journal. , 97(1):320. Ting-Rui Chiang and Yun-Nung Chen. 2019. Semantically-aligned equation generation for solving and reasoning math word problems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , pages 26562668, Minneapolis, Minnesota. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pretraining text encoders as discriminators rather than generators. In International Conference on Learning Representations . John H Conway and G. Polya. 1985. How to solve it , volume 85. Princeton university press. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 44434458, Online. Association for Computational Linguistics. Christine Fellbaum. 1998. WordNet: an electronic lexical database . MIT Press. Joseph Calvin Gagnon and Paula Maccini. 2001. Preparing students with disabilities for algebra. TEACHING Exceptional Children , 34(1):815. Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin. 2018. Neural math word problem solver with reinforcement learning. 
In Proceedings of the 27th International Conference on Computational Linguistics , pages 213223, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 41984205, Online. Association for Computational Linguistics. Asha K. Jitendra, Edward Sczesniak, Cynthia C. Grif-fin, and Andria Deatline-Buchman. 2007. Mathematical word problem solving in third-grade classrooms. The Journal of Educational Research , 100(5):283302. Asha K. Jitendra and Jon R. Star. 2012. An exploratory study contrasting highand low-achieving students' percent word problem solving. Learning and Individual Differences , 22(1):151158. Nitish Shirish Keskar, Bryan McCann, Lav R. Varsh-ney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. Kyung Seo Ki, Donggeon Lee, Bugeun Kim, and Gahgene Gweon. 2020. Generating equation by utilizing operators : GEO model. In Proceedings of the 28th International Conference on Computational Linguistics , pages 426436, Barcelona, Spain (On-line). International Committee on Computational Linguistics. Bugeun Kim, Kyung Seo Ki, Donggeon Lee, and Gahgene Gweon. 2020. Point to the Expression: Solving Algebraic Word Problems using the Expression-Pointer Transformer Model. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 37683779, Online. Association for Computational Linguistics. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing , pages 388395, Barcelona, Spain. Association for Computational Linguistics. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics , 3:585597. Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 11521157, San Diego, California. Association for Computational Linguistics. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 271281, Baltimore, Maryland. Association for Computational Linguistics. D. Lee and G. Gweon. 2020. Solving arithmetic word problems with a templatebased multi-task deep neural network (t-mtdnn). In 2020 IEEE International 4451 Conference on Big Data and Smart Computing (Big-Comp) , pages 271274. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out , pages 7481, Barcelona, Spain. Association for Computational Linguistics. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 158167, Vancouver, Canada. 
Association for Computational Linguistics. Richard E. Mayer. 1981. Frequency norms and structural analysis of algebra story problems into families, categories, and templates. Instructional Science , 10(2):135175. Marjorie Montague. 2008. Self-regulation strategies to improve mathematical problem solving for students with learning disabilities. Learning Disability Quarterly , 31(1):3744. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics , pages 311318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research , 21(140):167. Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics , 8:264280. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing , pages 17431752, Lisbon, Portugal. Association for Computational Linguistics. Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence , AAAI'17, page 30823088. AAAI Press. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1073 1083, Vancouver, Canada. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 78817892, Online. Association for Computational Linguistics. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) . Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and dataset for algebra word problems. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers , pages 494504, Valencia, Spain. Association for Computational Linguistics. Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang, and Wen-tau Yih. 2016. Learning from explicit and implicit supervision jointly for algebra word problems. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing , pages 297306, Austin, Texas. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems , pages 59986008. R. Vedantam, C. L. Zitnick, and D. Parikh. 2015. Cider: Consensus-based image description evaluation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , pages 45664575. 
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing , pages 845854, Copenhagen, Denmark. Association for Computational Linguistics. Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2020. Large batch optimization for deep learning: Training bert in 76 minutes. In International Conference on Learning Representations . Kaizhong Zhang and Dennis Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM Journal on Computing , 18(6):12451262. Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing , pages 817822, Lisbon, Portugal. Association for Computational Linguistics. 4452 Figure 3: A screenshot of the system used for annotating explanations on the PEN dataset A Annotating explanations for PEN dataset This section describes the detailed process of annotating explanations. Using a web-based system shown in Figure 3, coders inputted a natural language explanation for each number/variable. To provide situational information for each num-ber/variable, we highlighted text snippets and equations related to the target number/variable. Based on the given information, the coder needed to complete the following sentence: number means ...\""
] | [
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Chinese spelling correction (CSC) is a task to detect and correct spelling errors in texts.",
"CSC is essentially a linguistic problem, thus the ability of language understanding is crucial to this task.",
"In this paper, we propose a P re-trained masked L anguage m O del with M isspelled knowledg E (PLOME) for CSC, which jointly learns how to understand language and correct spelling errors.",
"To this end, PLOME masks the chosen tokens with similar characters according to a confusion set rather than the fixed token [MASK] as in BERT.",
"Besides character prediction, PLOME also introduces pronunciation prediction to learn the misspelled knowledge on phonic level.",
"Moreover, phonological and visual similarity knowledge is important to this task.",
"PLOME utilizes GRU networks to model such knowledge based on characters' phonics and strokes.",
"Experiments are conducted on widely used benchmarks.",
"Our method achieves su-perior performance against state-of-the-art approaches by a remarkable margin.",
"We release the source code and pre-trained model for further use by the community 1 .",
"Chinese spelling correction (CSC) aims to detect and correct spelling errors in texts (Yu and Li, 2014).",
"It is a challenging yet important task in natural language processing, which plays an important role in various NLP applications such as search engine (Martins and Silva, 2004) and optical character recognition (Afli et al., 2016).",
"In Chinese, spelling errors can be mainly divided into two types: phonological errors and visual errors, which are separately caused by the misuse of phonologically similar characters and visually similar characters.",
"According to Liu et al. (2010), about 83% 1 https://github.com/liushulinle/PLOME Figure 1: Examples of Chinese spelling errors.",
"of errors are phonological and 48% are visual.",
"Figure 1 illustrates examples of such errors.",
"The first case is caused by the misuse of (gone) and (beautiful) with the same phonics, and the second case is caused by the misuse of (human) and (enter) with very similar shape.",
"Chinese spelling correction is a challenging task because it requires human-level language understanding ability to completely solve this problem (Zhang et al., 2020).",
"Therefore, language model plays an important role in CSC.",
"In fact, one of the mainstream solutions to this task is based on language models (Chen et al., 2013; Yu and Li, 2014; Tseng et al., 2015).",
"Currently, the latest approaches (Zhang et al., 2020; Cheng et al., 2020) are based on BERT (Devlin et al., 2019), which is a masked language model.",
"In these approaches, (masked) language models are independently pre-trained from the CSC task.",
"As a consequence, they did not learn any task-specific knowledge during pre-training.",
"Therefore, language models in these approaches are sub-optimal for CSC.",
"Chinese spelling errors are mainly caused by the misuse of phonologically or visually similar characters.",
"Thus, knowledge of the similarity between characters is crucial to this task.",
"Some work leveraged the confusion set, i.e. a set of similar characters, to fuse such information (Wang et al., 2018, 2019; Zhang et al., 2020).",
"However, confusion set is usually generated by heuristic rules or manual annotations, thus its coverage is limited.",
"To circumvent this problem, Hong et al. (2019) computed the similarity based on character's strokes and phonics.",
"The similarity was measured via rules rather than learned by the model, therefore such knowledge was not fully utilized.",
"In this paper, we propose PLOME, a P re-trained masked L anguage m O del with M isspelled knowledg E , for Chinese spelling correction.",
"The following characteristics make PLOME more effective than vanilla BERT for CSC.",
"First, we propose the confusion set based masking strategy, where each chosen token is randomly replaced by a similar character according to a confusion set rather than the fixed token [MASK] as in BERT.",
"Thus, PLOME jointly learns the semantics and misspelled knowledge during pre-training.",
"Second, the proposed model takes each character's strokes and phonics as input, which enables PLOME to model the similarity between arbitrary characters.",
"Third, PLOME learns the misspelled knowledge on both character and phonic level by jointly recovering the true character and phonics for masked tokens.",
"We conduct experiments on the widely used benchmark dataset SIGHAN (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015).",
"Experimental results show that PLOME significantly outperforms all the compared approaches, including the latest Soft-masked BERT (Zhang et al., 2020) and SpellGCN (Cheng et al., 2020).",
"We summarize our contributions as follows: (1) PLOME is the first task-specific language model designed for Chinese spelling correction.",
"The proposed confusion set based masking strategy enables our model to jointly learn the semantics and misspelled knowledge during pre-training.",
"(2) PLOME incorporates phonics and strokes, which enables it to model the similarity between arbitrary characters.",
"(3) PLOME is the first to model this task on both character and phonic level.",
"Chinese spelling correction is a challenging task in natural language processing, which plays important roles in many applications, such as search engine (Martins and Silva, 2004; Gao et al., 2010), automatic essay scoring (Burstein and Chodorow, 1999; Lonsdale and Strong-Krause, 2003), and optical character recognition (Afli et al., 2016; Wang et al., 2018).",
"It has been an active topic, and various approaches have been proposed in recent years (Yu and Li, 2014; Wang et al., 2018, 2019; Zhang et al., 2020; Cheng et al., 2020).",
"Early work on CSC followed the pipeline of error identification, candidate generation and selection.",
"Some researchers focused on unsupervised approaches, which typically adopted a confusion set to find correct candidates and employed language model to select the correct one (Chang, 1995; Huang et al., 2000; Chen et al., 2013; Yu and Li, 2014; Tseng et al., 2015).",
"However, these methods failed to condition the correction on the input sentence.",
"In order to model the input context, discriminative sequence tagging methods (Wang et al., 2018) and sequence-to-sequence generative models (Chollampatt et al., 2016; Ji et al., 2017; Ge et al., 2018; Wang et al., 2019) were employed.",
"BERT (Devlin et al., 2019) is a bidirectional language model based on Transformer encoder (Vaswani et al., 2017).",
"It has been demonstrated effective in a wide range of applications, such as question answering (Yang et al., 2019), information extraction (Lin et al., 2019), and semantic matching (Reimers and Gurevych, 2019).",
"Recently, it has dominated the researches on CSC (Hong et al., 2019; Zhang et al., 2020; Cheng et al., 2020).",
"Hong et al. (2019) adopted the DAE-Decoder paradigm with BERT as encoder.",
"Zhang et al. (2020) introduced a detection network to generate the masking vector for the BERT-based correction network.",
"Cheng et al. (2020) employed the graph convolution network (GCN) (Kipf and Welling, 2016) combined with BERT to model character interdependence.",
"However, BERT is designed and pre-trained independently from the CSC task, thus it is sub-optimal.",
"To improve the performance, we propose a task-specific language model for CSC.",
"We introduce PLOME and its detailed implementation in this section.",
"Figure 2 illustrates the framework of PLOME.",
"Similar to BERT (Devlin et al., 2019), the proposed model also follows the pre-training&fine-tuning paradigm.",
"In the following subsections, we first introduce the confusion set based masking strategy, then present the architecture of PLOME and the learning objectives, finally show the details of fine-tuning.",
"In order to train PLOME, we randomly mask some percentage of the input tokens and then recover them.",
"Devlin et al. (2019) replaced the chosen tokens by a fixed token [MASK], which is nonexistent in downstream tasks.",
"On the contrast, we remove this token and replace each chosen token by a random character that is similar to it.",
"Similar characters are obtained from a publicly available confusion set (Wu et al., 2013), which contains two types of similar characters: phonologically similar and visually similar.",
"Since phonological errors are two times more frequent than visual errors (Liu et al., 2010), these two types of similar characters are assigned different chance to be chosen during masking.",
"Following Devlin et al. (2019), we totally mask 15% of tokens in the corpus.",
"In addition, we use dynamic masking strategy (Liu et al., 2019), where the masking pattern is generated every time a sequence is fed into the model.",
"Always replacing chosen tokens by characters in a confusion set will cause two problems.",
"(1).",
"The model tends to make correction decision for all inputs since all the tokens to be predicted during pre-training are misspelled.",
"To circumvent this problem, some percentage of the selected tokens are unchanged.",
"(2).",
"The size of confusion set is limited, however misspelling may be caused by the misuse of an arbitrary pair of characters in real texts.",
"To improve generalization ability, we replace some percentage of chosen tokens by random characters from the vocabulary.",
"To sum up, if Sentence Original Sentence (qu)",
"the i -th token is chosen, we replace it with",
"(i) a random phonologically similar character 60% of the time",
"(ii) a random visually similar character 15% of the time",
"(iii) the unchanged i -th token 15% of the time",
"(iv) a random token in the vocabulary 10% of the time.",
"Table 1 presents examples of different masking strategies.",
"As shown in Figure 2, the final embedding of each character is the sum of character embedding, position embedding, phonic embedding and shape embedding.",
"The former two are obtained via looking up embedding tables, where the size of vocabulary and embedding dimension are the same as that in BERT base (Devlin et al., 2019).",
"Phonic Embedding In Chinese, phonics (also known as Pinyin) represents the pronunciation of a character, which is a sequence of lowercase letters Figure 3: Illustration of phonic GRU network and shape GRU network.",
"with a diacritic 2 .",
"In this paper, we use the Unihan Database 3 to obtain the character-phonics mapping (diacritic is removed).",
"To model the phonological relationship between characters, we feed the letters of each character's phonics to a 1-layer GRU (Bah-danau et al., 2014) network to generate the phonic embedding, where similar phonics are expected to have similar embeddings.",
"An example is given in the middle part in Figure 3.",
"Shape Embedding We use the Stroke Order 4 to represent the shape of a character, which is a sequence of strokes indicating the order in which the strokes of a Chinese character are written.",
"A stroke is a movement of a writing instrument on a writing surface.",
"In this paper, stroke data is obtained via Chaizi Database 5 .",
"In order to model the visual relationship between characters, the Stroke order of each character is fed into another 1-layer GRU network to generate the shape embedding.",
"An example is given in the bottom part in Figure 3.",
"The transformer encoder has the same architecture as that in BERT base (Devlin et al., 2019).",
"The number of transformer layers (Vaswani et al., 2017) is 12, the size of hidden units is 768 and the number of attention head is 12.",
"For more detailed configu-rations please refer to Devlin et al. (2019).",
"As illustrated in Figure 2, our model makes two predictions for each chosen character.",
"Character Prediction Similar to BERT, PLOME predicts the original character for each 2 https://en.wikipedia.org/wiki/Pinyin 3 http://www.unicode.org/charts/unihan.html 4 https://en.wikipedia.org/wiki/Stroke order 5 https://github.com/kfcd/chaizi masked token based on the embedding generated by the last transformer layer.",
"The probability of the character predicted for the i -th token in a given sentence is defined as: p c ( y i = j | X ) = softmax ( W c h i + b c )[ j ] (1) where p c ( y i = j | X ) is the conditional probability that the true character of the i -th token x i is predicted as the j -th character in vocabulary, h i denotes the embedding output from the last transformer layer for x i , W c R n c 768 and b c R n c are parameters for character prediction, n c is the size of the vocabulary.",
"Pronunciation Prediction Chinese totally has about 430 different pronunciations (represented by phonics) but has more than 2,500 common used characters.",
"Thus, many characters share the same pronunciation.",
"Moreover, some pronunciations are so similar that it is easy to be misused, such as jing and jin.",
"Therefore, phonological error dominates Chinese spelling errors.",
"In practice, about 80% of spelling errors are phonological (Zhang et al., 2020).",
"In order to learn the misspelled knowledge on phonic level, PLOME also predicts the true pronunciation for each masked token, where pronunciation is presented by phonics without diacritic.",
"The probability of pronunciation prediction is defined as: p p ( g i = k | X ) = softmax ( W p h i + b p )[ k ] (2) where p p ( g i = k | X ) is the conditional probability that the correct pronunciation of the masked character x i is predicted as the k -th phonics in the phonic vocabulary, h i denotes the embedding output from the last transformer layer for x i , W c R n p 768 and b p R n p are parameters for pronunciation prediction, n p is the size of the phonic vocabulary.",
"The learning process is driven by optimizing two objectives, corresponding to character prediction and pronunciation prediction, respectively.",
"L c = n (cid:88) i =1 log p c ( y i = l i | X ) (3) L p = n (cid:88) i =1 log p p ( g i = r i | X ) (4) where L c is the objective for character prediction, l i is the true character for x i , L p is the objective for pronunciation prediction, r i is the true pronunciation.",
"The overall objective is defined as: L = L c + L p (5) 3.6 Fine-tuning Procedure Above subsections present the details of the pretraining procedure.",
"In this subsection, we introduce the fine-tuning procedure.",
"PLOME is designed for the CSC task, which aims to detect and correct spelling errors in Chinese texts.",
"Formally, given a character sequence X = { x 1 , x 2 , ..., x n } consisting of n characters, the model is expected to generate a target sequence Y = { y 1 , y 2 , ..., y n } , where errors are corrected.",
"Training The learning objective is exactly the same as that in the pre-training procedure(see Section 3.5).",
"This procedure is similar to pre-training except that: (1).",
"the masking operation introduced in Section 3.1 is eliminated.",
"(2).",
"all input characters require to be predicted rather than only chosen tokens as in pre-training.",
"Inference As illustrated in Section 3.4, PLOME predicts both the character distribution and pronunciation distribution for each masked token.",
"We define the joint distribution as: p j ( y i = j | X ) = p c ( y i = j | X ) p p ( g i = j p | X ) (6) where p j ( y i = j | X ) is the probability that the original character of x i is predicted as the j -th character jointly considering the character and pronunciation predictions, p c and p p are separately defined in Equation 1 and Equation 2, j p is the pronunciation of the j -th character.",
"To this end, we construct an indicator matrix I R n c n p , where I i,j is set to 1 if the pronunciation of the i -th character is the j -th phonics, otherwise set to",
"0. Then the joint distribution can be computed by: p j ( y i | X ) = [ p p ( g i | X ) IT ] (cid:12) p c ( y i | X ) (7) where (cid:12) is the element-wise production.",
"We use the joint probability as the predicted distribution.",
"For each input token, the character with the highest joint probability is selected as the final output: (cid:98) y i = argmax p j ( y i | X ) .",
"The joint distribution simultaneously takes the character and pronunciation predictions into consideration, thus is more accurate.",
"We will verify it in Section 4.5.",
"In this section, we present the details for pretraining PLOME and the fine-tuning results on the most widely used benchmark dataset.",
"Dataset We use wiki2019zh 6 as the pre-training corpus, which consists of one million Chinese Wikipedia 7 pages.",
"Moreover, we also collect three million news articles from a Chinese news platform.",
"We split those pages and articles into sentences and totally obtain 162.1 million sentences.",
"Then we concatenate consecutive sentences to obtain text fragments with at most 510 characters, which are used as the training instances.",
"Parameter Settings We denote the dimension of character embeddings, letter (in phonics) embeddings and stroke embeddings as d c , d l , d s , respectively, the dimension of hidden states in phonic and shape GRU networks as h p , and h s .",
"Then we have d c = 768 , d l = d s = 32 , h p = h s = 768 .",
"The configuration of transformer encoder is exactly the same as that in BERT base (Devlin et al., 2019), and the learning rate is set to 5e-5.",
"These parameters are set based on experience because of the large cost of pre-training.",
"Better performance could be achieved if parameter tuning technique (e.g. grid search) is employed.",
"Moreover, instead of training PLOME from scratch, we adopt the parameters of Chinese BERT released by Google 8 to initialize the Transformer blocks.",
"Training Data Following Cheng et al. (2020), the training data is composed of 10K manually annotated samples from SIGHAN (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015) and 271K automatically generated samples from Wang et al. (2018).",
"Evaluation Data We use the latest SIGHAN test dataset (Tseng et al., 2015) as in Zhang et al. (2020) to evaluate the proposed model, which contains 1100 texts and 461 types of errors.",
"Evaluation Metrics Following previous work (Cheng et al., 2020; Zhang et al., 2020), we use the 6 https://github.com/suzhoushr/nlp chinese corpus 7 https://zh.wikipedia.org/wiki/ 8 https://github.com/google-research/bert Category Method Character-level (%) Sentence-level (%) Detection-level Correction-level Detection-level Correction-level P R F P R F P R F P R F SOTA Hybrid (Wang et al., 2018) 54.0 69.3 60.7 -52.1 ----PN (Wang et al., 2019) 66.8 73.1 69.8 71.5 59.5 69.9 ---FASPell (Hong et al., 2019) ---67.6 60.0 63.5 66.6 59.1 62.6 SKBERT (Zhang et al., 2020) ---73.7 73.2 73.5 66.7 66.2 66.4 SpellGCN (Cheng et al., 2020) 88.9 87.7 88.3 95.7 83.9 89.4 74.8 80.7 77.7 72.1 77.7 75.9 Pretrain cBERT-Pretrain 64.2 83.2 72.5 85.6 71.2 77.7 37.9 49.5 42.9 32.1 42.0 36.4 PLOME-Pretrain 68.1 74.2 71.0 83.2 61.7 70.9 41.8 47.5 44.5 34.2 38.9 36.4 Finetune BERT-Finetune 90.9 84.9 87.8 95.6 81.2 87.8 68.4 77.6 72.7 66.0 74.9 70.2 cBERT-Finetune 92.4 87.7 90.0 96.2 84.4 89.9 75.3 78.9 77.1 72.7 76.1 74.4 PLOME-Finetune 94.5 87.4 90.8 97.2 84.3 90.3 77.4 81.5 79.4 75.3 79.3 77.2 Table 2: The performance of our approach and baseline models.",
"precision, recall and F1 scores as the evaluation metrics.",
"Besides character-level evaluation, we also report sentence-level metrics on the detection and correction sub-tasks.",
"We evaluate these metrics using the script from Cheng et al. (2020) 9 .",
"Parameter Settings Following Cheng et al. (2020), we set the maximum sentence length to 180, batch size to 32 and the learning rate to 5e-5.",
"All experiments are conducted for 4 runs and the averaged metric is reported.",
"The code and trained models will be released (currently the code is attached in the supplementary files).",
"We use the following methods for comparison.",
"Hybird (Wang et al., 2018) uses a BiLSTM-based model trained on an automatically generated dataset.",
"PN (Wang et al., 2019) is a Seq2Seq model incorporating a pointer network.",
"FASPell (Hong et al., 2019) adopts the DAE-Decoder paradigm and employs BERT as the de-noising auto-encoder.",
"SKBERT (Zhang et al., 2020) introduces the S oft-mas K ing strategy in BERT to improve the performance of error detection.",
"SpellGCN (Cheng et al., 2020) combines a GCN network with BERT to model the relationship between characters in the given confusion set.",
"(De-9 https://github.com/ACL2020SpellGCN/SpellGCN",
"vlin et al., 2019).",
"The output layer is similar to PLOME , but only has the character prediction as defined in Equation",
"1. cBERT is also pre-trained via the confusion set based masking strategy.",
"Table 2 illustrates the performance of the proposed method and baseline models.",
"The results of recently proposed models are presented in the first group.",
"The results of pre-trained and fine-tuned models are presented in the second and third group, respectively.",
"From this table, we observe that: 1) Without fine tuning, pre-trained models in the middle group achieve relatively good results, even outperform the supervised approach PN with remarkable gains.",
"This indicates that the confusion set based masking strategy enables our model to learn task-specific knowledge during pre-training.",
"2) Compared the fine-tuned models, cBERT outperforms BERT on all metrics.",
"Especially, the F score of sentence-level evaluations are improved by more than 4 absolute points.",
"The improvement is remarkable with such a large amount of training data (281k texts), which indicates that the proposed masking strategy provides essential knowledge and it can not be learned from fine tuning.",
"3) With the incorporation of phonic and shape embeddings, PLOME-Finetune outperforms cBERT-Finetune by 2.3% and 2.8% absolute improvements in sentence-level detection and correction.",
"This indicates that characters' phonics and strokes provide useful information and it can hardly be learned from the confusion set.",
"4) SpellGCN and our approach use the same con-Method Character-level on Whole Set Sentence-level via Official Tool Detection-level Correction-level Detection-level Correction-level P R F P R F FPR A P R F A P R F SpellGCN 77.7 85.6 81.4 96.9 82.9 89.4 13.2 83.7 85.9 80.6 83.1 82.2 85.4 77.6 81.3 BERT-Finetune 76.2 83.1 79.5 96.5 80.3 87.6 14.7 81.7 85.2 76.0 80.3 80.3 84.7 73.5 78.7 cBERT-Finetune 83.0 87.8 85.3 96.0 83.9 89.5 10.6 84.5 88.1 79.6 83.6 82.9 87.6 76.3 81.5 PLOME-Finetune 85.2 86.8 86.0 97.2 85.0 90.7 10.9 85.0 87.9 80.9 84.3 83.7 87.6 78.3 82.7 Table 3: Experimental results evaluated on the whole test set.",
"fusion set from Wu et al. (2013), but adopt different strategies to learn the knowledge contained in it.",
"SpellGCN built a GCN network to model this information, whereas PLOME learned it from huge scale data during pre-training.",
"PLOME achieves better performance on all metrics, indicating that our approach is more effective to model such knowledge.",
"Previous work (Wang et al., 2019; Cheng et al., 2020) conducted the character-level evaluation on positive sentences which contain at least one error (sentence-level metrics were evaluated on the whole test set).",
"Thus, the precision score is very high.",
"The character-level results in table 2 are also evaluated in such manner for fair comparison.",
"To make more comprehensive evaluation, we report the results evaluated on the whole test set in table 3.",
"Moreover, following Cheng et al. (2020), we also report the sentence-level results evaluated by SIGHAN official tool.",
"We observe that PLOME consistently outperforms BERT and SpellGCN on all metrics.",
"To make more comprehensive comparisons, we also evaluate the proposed model on SIGHAN13(Wu et al., 2013) and SIGHAN14(Yu et al., 2014).",
"Following Cheng et al. (2020), we performed 6 additional fine-tuning epochs on SIGHAN13 as its data distribution differs from other datasets.",
"Table5 illustrates the results, from which we observe that PLOME consistently outperforms all the compared models.",
"As illustrated in Section 3.4 and 3.6, PLOME predicts three distributions for each character: the character distribution p c , the pronunciation distribution p p and the joint distribution p j .",
"The latter two distributions are related to pronunciation prediction, which is first to be introduced in this work.",
"In this subsection, we investigate the performance of PLOME with each of them as the final output.",
"The CSC task requires character prediction, thus we only compare the effects of the character prediction p c and the joint prediction p j .",
"Table 4 presents the experimental results, from which we observe that the joint distribution outperforms the character distribution on all evaluation metrics.",
"Especially, the gap of precision scores is more obvious.",
"The joint distribution simultaneously takes the character and pronunciation predic-Method Character-level Sentence-level Detection-level Correction-level Detection-level Correction-level P R F P R F P R F P R F cBERT-Rand 81.8 86.2 83.9 96.3 83.0 89.2 73.7 77.0 75.3 70.0 73.9 71.9 cBERT-BERT 83.0 87.8 85.3 96.0 83.9 89.5 75.3 78.9 77.1 72.7 76.1 74.4 PLOME-Rand 83.4 86.6 84.9 96.8 83.9 89.9 75.9 80.7 78.2 73.6 78.3 75.9 PLOME-BERT 85.2 86.8 86.0 97.2 85.0 90.7 77.4 81.5 79.4 75.3 79.3 77.2 Table 6: The performance of cBERT and PLOME with different initialization strategies.",
"Generally speaking, initialization strategy has a great influence on the performance for deep models.",
"In this subsection, we investigate the effects of different initialization strategies in the pre-training procedure.",
"For comparison, we implement four baselines based on cBERT and PLOME .",
"Table 6 illustrates the results, where methods named with *-Rand initialize all the parameters randomly and methods named with *-BERT initialize the transformer encoder by BERT released by Google.",
"From the table we observe that both cBERT and PLOME initialized with BERT achieve better performance.",
"Especially, the recall score improves significantly for all evaluations.",
"We believe the following two reasons may explain this phenomenon.",
"1) The rich semantic information in BERT can effectively improves the generalization ability.",
"2) PLOME is composed of two 1-layer GRU networks and a 12-layer transformer encoder, and totally contains more than 110M parameters.",
"It is easily trapped into local optimization when training such a large-scale model from scratch.",
"In this subsection, we investigate whether the phonic and shape GRU networks learned meaningful representations for characters.",
"To this end, we generate the phonic and shape embeddings for each character by the GRU networks in Figure 2 and then visualize them.",
"Figure 4 illustrates 30 characters nearest to ' according to the cosine similarity of the 768-dim embeddings generated by GRU networks, which is visualized via t-SNE (Maaten and Hinton, 2008).",
"On one hand, nearly all the characters similar to ', such as ' and ', are included in this Figure 4: The visualization of shape embeddings.",
"figure.",
"On the other hand, similar characters are very close to each other (labeled by circles).",
"These phenomena indicate that the learned shape embedding well models the shape similarity.",
"Figure 5 shows the same situation for the phonic embedding related to ding' and also demonstrates its ability in modeling phonic similarity.",
"In this subsection, we investigate the converging speed of various models in the fine-tuning procedure.",
"Figure 6 shows the test curves for character-level detection metrics of BERT , cBERT and PLOME .",
"Thanks to the confusion set based masking strategy, cBERT and PLOME learned task-specific knowledge in the pre-training procedure, therefore they achieve much better performance than BERT at the beginning of the training.",
"As the training went on, the gap gradually narrowed dur-Figure 6: The test curves for character-level detection metrics of various models in the fine-tuning procedure.",
"ing the first 35,000 steps and then remained stable with a gap of 6%(86% vs. 80%).",
"In addition, the proposed model needs much less training steps to achieve a relatively good performance.",
"PLOME needs only 7k steps to achieve the score of 80%, whereas BERT needs 47k steps.",
"We propose PLOME, a pre-trained masked language model with misspelled knowledge for CSC.",
"To the best of our knowledge, PLOME is the first task-specific language model for CSC, which jointly learns semantics and misspelled knowledge thanks to the confusion set based masking strategy.",
"Previous work demonstrated that phonological and visual similarity between characters is essential to this task.",
"We introduce phonic and shape GRU networks to model such features.",
"Moreover, PLOME is also the first model that makes decision via jointly considering the target pronunciation and character distributions.",
"Experimental results showed that PLOME outperforms all the compared models with remarkable gains.",
"We thank Lei He, Suncong Zheng and Weikang Wang for helpful discussions, and anonymous reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Estimating the expected output quality of generation systems is central to NLG.",
"This paper qualifies the notion that automatic metrics are not as good as humans in estimating system-level quality.",
"Statistically, humans are unbiased, high variance estimators, while metrics are biased, low variance estimators.",
"We compare these estimators by their error in pairwise prediction (which generation system is better?) using the bootstrap.",
"Measuring this error is complicated: predictions are evaluated against noisy, human predicted labels instead of the ground truth, and metric predictions fluctuate based on the test sets they were calculated on.",
"By applying a bias-variance-noise decomposition, we adjust this error to a noise-free, infinite test set setting.",
"Our analysis compares the adjusted error of metrics to humans and a derived, perfect segment-level annotator, both of which are unbiased estimators dependent on the number of judgments collected.",
"In MT, we identify two settings where metrics outperform humans due to a statistical advantage in variance: when the number of human judgments used is small, and when the quality difference between compared systems is small.",
"1 1 Introduction Automatic metrics are involved in many developmental settings for natural language generation (NLG) systems.",
"In machine translation (MT), metrics like BLEU (Papineni et al., 2002) enable settings where the amount of human effort required would be infeasible, such as architecture or hyper-parameter search (Britz et al., 2017).",
"As objective, reproducible quantities, BLEU scores facilitate cross-paper comparisons (Post, 2018).",
"Historically, progress in MT has been attributed to its 1 The data and code to reproduce our analyses are available at https://github.com/johntzwei/ metric-statistical-advantage .",
"use (Callison-Burch et al., 2006).",
"Metrics are an active research area in many NLG subfields, including summarization (Lin, 2004), dialogue (Tao et al., 2018), and image captioning (Anderson et al., 2016), which seek to realize the goal of quick and reliable automatic evaluation.",
"In all these subfields, the primary goal when conducting evaluation is typically to compare NLG systems.",
"Both human annotators and automatic metrics produce segment-level scores, i.e., scores for individual examples, so comparing systems requires aggregating segment-level scores into an overall system-level score for each system.",
"Ideally, we would compare systems by their expected human annotator score (an average over infinite human judgments), which we term the true system quality.",
"In practice, we can only estimate this expectation with a sample mean over a finite number of human judgments.",
"Metrics offer a cheaper alternative: we can instead compare systems by their aggregate metric scores on a number of system outputs.",
"When comparing systems, we care primarily about how well we estimate the difference of their true system qualities, and in particular the sign of this difference (i.e., which system is better), which we term the true pairwise label.",
"There is a gap in our understanding of system-level metrics.",
"To recount a perplexing anecdote, in the most recent edition of the WMT metrics shared task (Mathur et al., 2020b), initial human evaluation disagreed with most metrics on a pairwise prediction of two translation systems.",
"In a manual re-evaluation, the second round results favored the metrics.",
"Our paper offers a statistical explanation for how humans could go wrong: even if human estimation for the difference in system quality is unbiased, it has high variance.",
"On the other hand, while estimators based on metrics are biased, they have low variance.",
"It is therefore possible for metrics to give a more accurate pairwise prediction than humans when the bias is small (see illustration in Figure 1).",
"Our paper explores this distinction in the following three questions: (1) How can we evaluate system-level metrics?",
"When observing estimator error in terms of pairwise predictions, predictions are evaluated against noisy, human predicted labels rather than the ground truth.",
"In addition, metric predictions fluctuate based on the sample of outputs from the generation system.",
"To disentangle these properties, we examine observed estimator error under a bias-variance-noise decomposition.",
"Under simulation, we find that the label noise and metric variance account for a small fraction of observed error in both MT and summarization.",
"(2) How good are these metrics?",
"We compare the errors of metric estimators computed on an infinite number of system outputs, against human estimators with varying amounts of human judgment.",
"We also derive the error of a perfect segment-level annotator (i.e. they provide noiseless/expected human scores for each output), which is also unbiased and judgment dependent.",
"Empirically, some MT metrics exceed the performance of unbiased estimators with a small number of judgments.",
"(3) What are the limits of system-level evaluation?",
"The perfect segment-level annotator, as the noiseless human, provides an optimistic estimate for the number of human judgments necessary to achieve a fixed performance.",
"With a power analysis, we can analytically calculate the number of judgments necessary to detect differences between systems of varying sizes.",
"When differences in system quality are small, a prohibitively large number of perfect annotator judgments are required to give a correct pairwise prediction.",
"We will now formalize scoring at the system level, adopting notation from Chaganty et al. (2018).",
"Let X be a distribution over inputs (e.g. source sen-tences), and S be a set of systems (e.g. all translation systems in WMT).",
"Each system S S takes input x X and returns output z = S ( x ) (e.g. z is a translation).",
"Let H ( z ) be a random variable representing a human judgment according to some evaluation prompt (e.g. translation adequacy, from 0-100).",
"A central quantity of interest is the quality of system S , defined as HS = E x X [ H ( S ( x ))] (1) and is not directly observable as it requires infinite human judgment.",
"We can estimate (1) with a finite test set of n examples.",
"Let x (1) , . . . , x ( n ) i.i.d. X be a sampled test set and z (1) , . . . , z ( n ) be the set of outputs where each z ( i ) = S ( x ( i ) ) .",
"Human judgments are sampled independently as y ( i ) H ( z ( i ) ) .",
"The sample mean (cid:99) HS = 1 n n (cid:88) i =1 y ( i ) (2) is an unbiased estimator of (1).",
"A cheaper alternative to estimating the true quality scores is with an estimator based on an automatic metric.",
"Let M (e.g. BERTSCORE ) be an automatic metric that takes as input any number of outputs from a system S and produces score (cid:100) MS = M ( z (1) , . . . , z ( n ) ) (3) where (cid:100) MS is a biased estimator of HS .",
"As the test set is sampled, the metric score has non-zero variance.",
"Note that while we use the greek letter , only some system-level metrics (e.g. ROUGE) are averages of their segment-level counterparts (their score decomposes to (cid:100) MS = 1 n (cid:80) ni =1 M ( z ( i ) ) ).",
"Empirically, we find that metrics using other aggregation strategies have convergent properties similar to an average (see Appendix B).",
"We sidestep this by defining the true metric score as MS = M ( z (1) , . . . , z ( m ) ) (4) for test sets of size m sufficiently large so that this true score is nearly constant.",
"Research in system-level metrics have a tradition of evaluating metric correlation to human judgment with the Pearson correlation coefficient (Re-iter, 2018).",
"Formally, these evaluations compare (cid:99) r M = Corr S ( (cid:99) HS , (cid:100) MS ) for different metrics M .",
"Recently, Mathur et al. (2020a) highlights two issues with the use of correlation: First, Pearson's r is neither interpretable nor reflective of system-level metric use in practice.",
"Second, outlier systems (systems with very high/low human/metric scores) can arbitrarily inflate Pearson's r , and outlier systems often exist.",
"Mathur et al. (2020a) propose evaluating metric accuracy in pairwise prediction (can the metric differentiate which generation system is better?) as an alternative that mitigates the issues mentioned above.",
"We add two points that apply to any measure of metric performance, correlation or pairwise predictions: First, metrics cannot be perfect due to noise in human labels.",
"For instance, while r ranges from [ 1 , 1] , even for the metric that predicts HS it has Corr S ( (cid:99) HS , HS ) < 1 due to noise in (cid:99) HS .",
"It is unclear what is the true upper bound of performance we can expect to achieve.",
"Second, direct measurement of any performance measure on our datasets introduces sample bias (Engstrom et al., 2020).",
"For correlation, (cid:99) r M could be high because (cid:99) HS and (cid:100) MS happened to align for this data collection, but a repeat experiment could yield different results.",
"A more holistic view is to give an estimate of average case performance.",
"2 The evaluation methodology we derive in 4 addresses the latter points we raise for pairwise predictions and mean squared error (which has direct relationship to the correlation).",
"However, we also believe that pairwise predictions is a step in the right direction, and our discussion continues with pairwise predictions.",
"We will now formalize pairwise predictions.",
"For systems S, S (cid:48) S , define the true difference in their system scores as HS,S (cid:48) = HS HS (cid:48) (5) 2 Pearson's r was not formulated for individual distributions (cid:99) HS and (cid:100) MS for each datapoint, so applying the William's test (Graham and Baldwin, 2014) also falls short here.",
"and likewise for the differences MS,S (cid:48) and (cid:100) MS,S (cid:48) w.r.t. to a metric M .",
"In practice, we are interested in the pairwise prediction of S and S (cid:48) i.e. whether HS,S (cid:48) ?",
"> 0 , given that we have collected human judgments (we observe (cid:100) HS,S (cid:48) 0 ), or computed metric scores (we observe (cid:100) MS,S (cid:48) 0 ).",
"Refer to Figure 1 for an illustration.",
"To operationalize the pairwise prediction of S and S (cid:48) , let the true pairwise label HS,S (cid:48) = sign ( HS,S (cid:48) ) (7) be defined as the central quantity of interest.",
"Define the human predicted pairwise label as (cid:91) HS,S (cid:48) = sign ( (cid:100) HS,S (cid:48) ) (8) and likewise for the true and estimated predictions MS,S (cid:48) and (cid:91) MS,S (cid:48) w.r.t. to a metric M. The 0-1 clas-sification loss for metric M on this example is L ( HS,S (cid:48) , (cid:91) MS,S (cid:48) ) = I [ HS,S (cid:48) (cid:54) = (cid:91) MS,S (cid:48) ] (9) and the pairwise error of an estimator is the loss incurred averaged over all pairwise examples.",
"Ideally, we could calculate the true error of M Err true ( M ) = ES [ L ( HS,S (cid:48) , MS,S (cid:48) )] (10) but we can only compute an error of M with noisy human labels and metric scores estimated from finite sized test sets Err obs ( M ) = EX , S [ L ( (cid:91) HS,S (cid:48) , (cid:92) MS,S (cid:48) )] (11) which is typically estimated when we calculate metric pairwise accuracy from our datasets.",
"Data.",
"We use the past 4 years of to-English translation data from the WMT metrics shared task (Bojar et al., 2016b, 2017; Ma et al., 2018, 2019).",
"3 Across all years and language pairs, there are 261 MT systems.",
"Pairs of MT systems are extracted within each year, within each language pair, resulting in 1324 pairwise examples.",
"For each output of an MT system, there are one or more humans judgements and one reference for metric scoring.",
"1306-5117 outputs were collected for each MT system totaling 3 The WMT20 metrics shared task data was not publicly available at the time of submission.",
"about 1312-5612 judgments, depending on the year and language pair.",
"For ease of interpretation, we always use raw direct assessment judgments which range from 0-100.",
"Metrics.",
"We evaluate the performance of the three metrics included in SacreBleu (BLEU, TER, chrF; Post, 2018; Koehn et al., 2007).",
"These three have also participated in every year of the metrics task as baselines.",
"In addition, we include two recently developed metrics: BERTSCORE (Zhang et al., 2020) and BLEURT (Sellam et al., 2020).",
"Both metrics are found to effectively utilize contextual embeddings (Devlin et al., 2019), and BLEURT is a learned metric (tuned on data outside of WMT2019).",
"For all metrics, we use the default settings for scoring.",
"Since BLEURT is trained on WMT15-18, we test it only on WMT2019 pairs.",
"Data.",
"The SummEval dataset (Fabbri et al., 2020) contains 100 outputs from 17 summarization systems.",
"This results in 136 pairwise examples.",
"For each system output, 3 expert judgments, and 11 references for metric scoring.",
"Each summarization is judged in four categories from 0-5: coherence, consistency, fluency, and relevance.",
"To compute system-level human scores for a system, we first average over categories for an aggregate expert score, and then average the aggregated expert scores per system.",
"Metric scores for system outputs were computed against as many references as possible.",
"Metrics.",
"We evaluate the performance of several metrics that were found to be effective at the system-level in Fabbri et al. (2020).",
"This includes the traditional ROUGE-4 (Lin, 2004) summarization metric, its extension ROUGE-WE (Ng and Abrecht, 2015), and METEOR (Lavie and Agarwal, 2007).",
"In addition, we include two metrics based on BERT (Devlin et al., 2019).",
"BertScore (Zhang et al., 2020), also present in the WMT analysis, and SUPERT (Gao et al., 2020), which is a reference-less metric for summarization.",
"Two sources of variation distinguish the observed pairwise error (11) from the true error in (10) the noise in the human predicted labels due to finite judgements, and the variance in the metric due to finite test sets.",
"Approximating (11) is straightforward with the bootstrap, but disentangling the error from these two sources of variation requires more care.",
"With the bias-variance-noise decomposition, we can adjust our observed error estimates to the noise-free, infinite test set setting of the true error.",
"The bias-variance-noise decomposition due to Domingos (2000) decomposes the observed pairwise error in (11) w.r.t. two constant labels for any pairwise example on systems S, S (cid:48) S : The true pairwise label for this example is",
"H S,S (cid:48) := arg min y { 1 , 1 } EX [ L ( (cid:91) HS,S (cid:48) , y )] (12)",
"and the estimator that produces these true labels has, by definition, the lowest observed error.",
"In the decomposition, the human predicted label noise and metric bias is defined relative to the true labels.",
"Assuming the central limit theorem (proof in Appendix A), we actually have H S,S (cid:48) = HS,S (cid:48) as defined in eq.",
"(5).",
"M S,S (cid:48) = arg min y { 1 , 1 } EX [ L ( (cid:91) MS,S (cid:48) , y )] (13)",
"and we assume that the metric prediction converges onto the main prediction as the test data",
"increases for S and S (cid:48) (empirically validated in Appendix B).",
"In the decomposition, the metric variance is defined relative to the main prediction.",
"Starting from the loss incurred by M on this pairwise example, the decomposition gives us EX [ L ( (cid:91) HS,S (cid:48) , (cid:91) MS,S (cid:48) )] = c 0 Noise ( (cid:91) HS,S (cid:48) ) (14) + Bias ( (cid:91) MS,S (cid:48) ) + c 1 Var ( (cid:91) MS,S (cid:48) ) where Noise ( (cid:91) HS,S (cid:48) ) = E [ L ( (cid:91) HS,S (cid:48) , H S,S (cid:48) )] where the noise is an irreducible loss incurred by computing pairwise accuracy to the human predicted labels instead of the true labels.",
"Note that this noise term also exactly corresponds to the lowest achievable observable error (see 4.2).",
"Bias ( (cid:91) MS,S (cid:48) ) = L ( H S,S (cid:48) , M S,S (cid:48) ) where the bias is 0 if the main prediction is correct (w.r.t. to the true label), and 1 otherwise.",
"Note that this term is also the true error of a metric estimator in a noise-free, infinite test set setting.",
"For unbiased estimators this term is zero, as their main prediction matches the true label.",
"Var ( (cid:91) MS,S (cid:48) ) = E [ L ( (cid:91) MS,S (cid:48) , M S,S (cid:48) )] where the variance is a likelihood that the estimator deviates from its main prediction under random sampling.",
"c 0 = 2 PX ( (cid:91) MS,S (cid:48) = H S,S (cid:48) ) 1 which means that the influence of label noise on the error becomes small if the estimator prediction are close to random chance.",
"When the estimator gives constant predictions, the sign of c 0 is dependent on the estimator's correctness.",
"c 1 = 1 if (cid:91) MS,S (cid:48) = H S,S (cid:48) and c 1 = 1 otherwise.",
"Variance can both increase and decrease the observed error.",
"If the estimator is unbiased, the variance causes the prediction to from the correct main prediction.",
"On the other hand, for a biased estimator, deviation from its incorrect main prediction occasionally decreases the error.",
"Unlike the decomposition for mean squared error, the interaction between the c 0 and Var terms only allows the error of two hypothetical settings to be read off directly from the table: when Noise 0 , corresponding to estimator error when computed against the ground truth; or when Noise + Var 0 , when the ground truth is used and metrics have access to an infinite test set for scoring.",
"By definition the constant estimator that produces the true pairwise labels H S,S (cid:48) (defined in (12)) for each pairwise example has the lowest possible observable error.",
"The observable error of this optimal estimator is exactly E [ L ( (cid:91) HS,S (cid:48) , H S,S (cid:48) )] = Noise ( (cid:91) HS,S (cid:48) ) .",
"Since this estimator is constant it has no variance, and since it is instantiated by definition it has no bias.",
"Analytically, the observed error of any estimator is lower bounded by Noise ( (cid:91) HS,S (cid:48) ) and is the agreement of our human predicted labels with the ground truth.",
"Assuming the bootstrap (Efron and Tibshirani, 1993) which is a common procedure in NLP (Dror et al., 2018), we can estimate the expectation quantities in the decomposition.",
"By assuming that sampling with replacement from our datasets approximates real sampling, we can repeatedly simulate the quantity in an expectation.",
"Taking the mean over trials gives the bootstrap estimate of the expectation.",
"We emphasize that this is a regular application of a widely accepted techniquethe bootstrap assumption allows us to study problems that would be impossible due to the cost of repeat experiments.",
"The following analyses refer to the error components (averaged over all examples) from the simulated decomposition presented in Table",
"1. The noise component almost always accounts for a small fraction of the total error.",
"We found this to be counterintuitivewhile the lowest observable error (optimal predictions, see 4.2) incur about 5% error on both datasets, the influence of the noise is much smaller than those errors suggest.",
"For the constant c 0 scaling the noise, c 0 = 0 if the metric prediction is near random.",
"Since the c 0 Noise term on average is small two cases hold true: when humans are uncertain about the example (noise term large) metrics are as well ( c 0 term small), and when metrics are certain about the examples ( c 0 term large) humans are as well (noise term small).",
"The second case empirically shows studying the sampling distribution of metrics (Koehn, 2004; Berg-Kirkpatrick et al., 2012) is effective, as metric certainty in the difference of system quality often implies human certainty.",
"Metric variance introduces little to the pair-100 200 300 400 500 600 700 Number of judgments 0.09 0.13 0.10 E rr t r u e ( M ) BERTs BLEU F Human Perfect annotator Figure 2: Comparison of metrics to human and perfect annotator estimators with varying number of judgments in WMT.",
"Alternatively, metrics stand to gain little from using more test set examples.",
"In MT, dropping both the noise and variance components for the error results in at most a 1 or 2 percent reduction in the observed error (see 9 for the implications in metrics research).",
"Metrics generally have low variance, so at the test set sizes of WMT and SummEval, they are likely to converge to their main predictions.",
"In 4, several MT metrics approach the error of the WMT human evaluation.",
"The WMT human evaluation is expensive, using thousands of judgments per translation system.",
"While each human judgment has associated monetary cost, once a large test set is collected, running metrics only incurs computational cost.",
"This section explores this asymmetry, and seeks to understand how much metric predictions are worth, in terms of human judgments.",
"We wish to give our best comparison between metrics and unbiased estimators (humans or the perfect annotator).",
"Ideally, metrics would be given their best chance to perform, by using an infinite test set.",
"With the decomposition, we can adjust metric errors estimates to a noise-free and infinite test set setting by taking only their bias component.",
"For human and perfect annotator estimators, we can adjust their errors to a noise-free setting by taking only the variance component.",
"The following sections compare these adjusted errors.",
"While we can estimate the lower bound to the pairwise error for a given dataset (in 4.2), it is achieved by a constant estimator using system-level ground truth.",
"Comparing segment-level metrics against the unbiased perfect annotator, or the best scorer at the segment-level, is more informative.",
"At the high-level, we can simulate scoring with the perfect annotator at n judgments using the human estimator at n (cid:48) > n judgments to match the variance of the perfect annotator estimator.",
"Let's start from the unbiased human estimator (cid:99) HS (2).",
"Recall that the estimator is a sample mean, so its variance is Var ( (cid:99) HS ) = Var ( H ( x )) /n .",
"An insight from Chaganty et al. (2018) gives us the decomposition of the variance of H ( x ) Var ( H ( x )) = Var ( E [ H ( x ) | x ]) (15) + E [ Var ( H ( x ) | x )] with the law of total variance.",
"In words, the variance term can be thought of as the variance of each output sentence's true quality score (some translations produced by S are better than others) and the expectation term is the noise introduced by the humans when estimating the quality of a sentence (human scores have mean 0 noise around an out-put's true quality score).",
"One intuition is that even if a perfect annotator gives the correct score for each sentence, every time, there is still some unavoidable variance in the estimator due to the variance of the hypothetical quality scores for each output.",
"To formalize this notion, let P ( x ) = E [ H ( x ) | x ] be the human scoring function of a perfect annotator, and the estimator (cid:99) PS be an empirical mean of n independent samples from P ( x ) similar to eq.",
"(1).",
"As a sample mean, Var ( (cid:99) PS ) = Var ( P ( x )) /n .",
"Relating this to (15) Var ( H ( x )) = E [ Var ( H ( x ) | x )]+ Var ( P ( x )) (16) and while Var ( P ( x )) is not directly observable, we can calculate Var ( H ( x )) with the sample variance on all the human judgments, and E [ Var ( H ( x ) | x )] with a pooled variance over variances from repeat human judgments on the same output sentence.",
"Our final step considers the efficiency ratio r = Var ( H ( x )) / Var ( P ( x )) .",
"If we are interested in the perfect annotator estimator at n judgments, the hu-1 2 3 4 5 Difference in system quality 25 23 21 19 17 15 T r u e m e a n s t d .",
"This completes our reasoning that for scoring on the system-level, sampling n (cid:48) = nr human judgments is nearly equivalent to sampling n perfect annotator judgments.",
"See Appendix C for step-by-step derivations for the perfect annotator variance in our datasets.",
"The following analyses refer to the comparison of metric estimators to unbiased estimators at varying number of judgments for WMT in Figure",
"2. Judgments from the perfect annotator have low variance, like those of professional linguists.",
"While we do not have data from professional linguists, we can qualitatively compare them to the perfect annotator.",
"A growing body of MT literature focuses on professional linguists (Freitag et al., 2020; Mathur et al., 2020b), and there are at least two known properties of their judgments: their judgments have better interannotator agreement (contain less noise), and they are more sensitive to linguistic phenomena.",
"The perfect annotator has no noise, as they assign a constant score to each segment.",
"However, the perfect annotator in WMT is better described as a noiseless crowdworker.",
"With the biases of crowdworkers, the perfect annotator may not share the sensitivity property, and our use of crowdworkers may be biased w.r.t. professional linguists.",
"In terms of average pairwise error, MT metrics have an equivalence to a high number of human judgments.",
"Since the error of the human estimator monotonically increases as the number of judgments decrease, each MT metric has a breakeven point.",
"Metrics outperform human estimators using judgments below this threshold.",
"BERTSCORE is as accurate as using a human estimator with 600 judgments per system, or the perfect annotator estimator with 300 judgments, across the WMT dataset.",
"We highlight the statistical advantage in variance many metrics share, and that this advantage offers a possibility that metrics can outperform humans, determined by which human estimator the metric is compared against.",
"This is a consequence of the general fact that humans are unbiased, high-variance estimators, and metrics are biased, low-variance estimators, as depicted in Figure",
"1. For metrics such as BERTSCORE or CHRF, the bias is low as well, which gives it remarkably good error properties.",
"The perfect annotator provides optimistic figures for human annotation, providing the best performance for a fixed number of judgments, and requiring the least judgments for a fixed performance.",
"In 5, we saw that the perfect annotator is weak at low number of judgments, due to its non-zero variance.",
"In this section we identify another consequence of the perfect annotator's variance, where estimating small differences in system quality is hard.",
"The performance of an unbiased estimator is dependent on their variance and the effect size it is trying to detect.",
"This section performs a power analysis to determine how much annotator effort is needed to reliably detect the correct pairwise judgment between two systems (Card et al., 2020).",
"To make an optimistic estimate, we assume our annotator variance is close to that of a perfect annotator.",
"We make two assumptions to apply a basic power analysis for the estimation of the difference of system quality between two systems: normality and equal variance across groups.",
"For parameters = 0 .",
"05 (false positive rate) and = 0 .",
"95 (false negative rate), we can analytically compute the number of judgments needed to ensure our pairwise judgment is at least (1 ) 90% accurate.",
"Table 2 contains power analyses for different instantiations of annotator variance and effect size.",
"In WMT, detecting a difference of 1 point requires at least 10K perfect annotator judgments, for different instantiations of its variance.",
"To put this in perspective, the top 5 zh-en translation systems in WMT19 differed by less than 3 points (Barrault et al., 2019).",
"Depending on how much is paid per judgment, this cost can quickly become infeasible.",
"Here, the merit of such a task may be argued, as knowing a small difference exists between two systems may not always be productive.",
"From a scientific perspective, many NLG techniques will yield small improvements, and not being able to detect small differences means we will not know whether these techniques are useful.",
"Since metrics tend to have lower variance, metrics often achieve significance in estimating the difference of system qualities, when humans cannot.",
"For instance, BERTSCORE achieves significance in estimating quality differences over half of the pairwise examples where humans do not (see Appendix E).",
"In extreme cases, human evaluation is nearly as bad as flipping a coin, but the metric can still offer a consistent prediction between two systems.",
"7 Caveats to the analysis Our analysis assumes that the human judgments are unbiased.",
"When comparing systems similar in quality, practitioners must accept that the number of possible analyses are limited.",
"In ablation studies where similar systems are often compared, metrics may be our only insight into system performance.",
"With white-box metrics such as BLEU, value can be derived from qualitative insight (e.g. systems with high BLEU score have high n -gram overlap with the reference set).",
"In addition, we may qualitatively analyze output statistics not intended to correlate with humans judgment at all (Neubig et al., 2019).",
"In WMT16-19, direct assessment (Gra-ham et al., 2013) was used to elicit judgments from a combination of crowdworkers and researchers.",
"Direct assessment (DA) uses an adequacy evaluation prompt (Rate how much you agree that the output translation adequately expresses the meaning of the reference translation) and asks contributors to rate on a 0-100 scale.",
"The unbiased ground truth is not a fixed goalpost.",
"A number of factors are known to change the eventual ranking of translation systems with human scoring.",
"Employing a different collection methodology, such as human translation edit rate (HTER) of instead of DA, can result in divergent system rankings (Graham et al., 2016).",
"In an earlier edition of WMT, DA judgments were collected with both a grammaticality prompt and an adequacy prompt, corresponding to different system rankings by the respective attribute (Bojar et al., 2016a).",
"Several studies have shown scoring differences between professional linguists and crowdworkers which are due in part to the fact that linguists are more sensitive to linguistic phenomena (Fabbri et al., 2019; Freitag et al., 2019).",
"The goals of an evaluation should be decided by the practitioner.",
"We do not give suggestions on any particular goals, and practitioners should understand what their application is, and which evaluation is the best approximation (refer to Gatt and Krahmer, 2018).",
"Unfortunately, since the existing data in this domain is limited, our analyses are limited as well.",
"However, the statistical techniques apply to any empirical method.",
"We hope that our analysis inspires others to think about statistical limits in this domain.",
"To push the limits of what can be evaluated, we need to improve on fundamental aspects of human evaluation.",
"On the human side, we may focus on creating larger effect sizes or reducing noise by adopting new annotation schemes (Laubli et al., 2018; Shapira et al., 2019) or employing professional linguists (Fabbri et al., 2020; Toral et al., 2018).",
"To make the human estimator more effi-cient, we may consider adaptive data collection techniques to stop data collection early when significance is achieved, in a statistically sound manner (Johari et al., 2017).",
"Strategies combining human and metric evaluation are also shown to have potential.",
"Variance reduction techniques can be applied to the human estimator by taking advantage of strong metrics (Chaganty et al., 2018).",
"Another bottleneck in human evaluation is in the random sampling of the test set.",
"Metrics could form the basis of an importance sampling procedure to choose test sets that would best differentiate two systems, as a form of robust evaluation (Chaganty et al., 2017).",
"On the metric side, if we can reliably estimate metric bias, we can skip human evaluation altogether when the metric is known to be good.",
"Probabilistic reinterpretations of current metrics could be a useful technique for confidence estimation (Keith and O'Connor, 2018).",
"Optimistically, metrics could have provable guarantees, ensuring the correctness of metric decisions (Jia et al., 2019).",
"We reinterpret problems in evaluating metrics with correlation (2.2) as a set of guidelines for metrics research.",
"To next year's organizers of the WMT metrics shared task and the broader metrics community we suggest the following: (1) Pairwise accuracy has desirable properties as an evaluation measure for metrics.",
"Our bias-variance-noise decomposition shows that the observed pairwise accuracy is very close to the true pairwise accuracy from a noise-free, infinite test set setting (4.4).",
"We suggest the use of pairwise accuracy as it reflects metric performance well (which may be verified using this analysis).",
"As a normalized form of pairwise accuracy, Kendall's is also a suitable measure.",
"(2) Since pairwise accuracy is computed against noisy human predictions, on average, it should be impossible for metrics to achieve a perfect accuracy.",
"We suggest providing an upper bound of metric performance (4.2) to clarify how much improvement is possible for metrics on the dataset.",
"The fact that a manual evaluation can be weak, and an automatic one can be better is gaining attention in the metrics community.",
"Mathur et al. (2020b) studied a disagreement between crowdworkers and metrics, and a reevaluation favored the metrics over the human prediction.",
"Recently, Freitag et al. (2021) shows that metrics can achieve higher agreement with professional linguists than crowdworkers in judging translation systems.",
"Their results fit into our formalization: if we assume professional linguists are unbiased, the bias and variance properties of metrics combined are superior to those of crowdworkers.",
"Our analysis assumes that crowdworkers are unbiased, where they assume professional linguists are instead.",
"We wish to highlight several works which inspired the elements of ours: Chaganty et al. (2018) and Hashimoto et al. (2019) formalize metrics as statistical estimators and provide understanding of their statistical properties and limits.",
"In the replication of ImageNet, Engstrom et al. (2020) found that dataset bias accounted for classifier performance differences between the original and the replicated dataset, and provide a decomposition for the sources of error.",
"In automated essay scoring, scorers are often evaluated against noisy human judgment, and Loukina et al. (2020) developed the PRMSE to calculate the MSE between scorer prediction and the true judgment, rather than noisy judgment.",
"Finally, in bioinformatics, Li et al. (2020) derive an upper bound of the R 2 coefficient due to experimental noise when regressing on experiment-derived results.",
"Through rigorous comparison between metrics, humans, and the perfect segment-level annotator, we identify the settings where metrics outperform humans due to a statistical advantage in variance.",
"These results challenge the notion that metrics are always secondary to human evaluation.",
"Instead, we encourage practitioners to understand when human evaluation is weak, and when metrics are necessary.",
"Finally, we hope to provide tools for analysis and future directions for evaluation.",
"Discussions with Nitika Mathur, Markus Freitag, and Thibault Sellam led to several insights.",
"Nelson Liu and Tianyi Zhang provided feedback on our first draft, and anonymous reviewers provided feedback on the submitted draft.",
"Nanyun Peng advised the first author, and on this work.",
"Alex Fabbri provided a scored version of the SummEval dataset.",
"We thank all who have made our work possible."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"result",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other"
] |
[
"Neural network architectures in natural language processing often use attention mechanisms to produce probability distributions over input token representations.",
"Attention has empirically been demonstrated to improve performance in various tasks, while its weights have been extensively used as explanations for model predictions.",
"Recent studies (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) have showed that it cannot generally be considered as a faithful explanation (Jacovi and Goldberg, 2020) across encoders and tasks.",
"In this paper, we seek to improve the faithfulness of attention-based explanations for text classification.",
"We achieve this by proposing a new family of Task-Scaling (TaSc) mechanisms that learn task-specific non-contextualised information to scale the original attention weights.",
"Evaluation tests for explanation faithfulness, show that the three proposed variants of TaSc improve attention-based explanations across two attention mechanisms, five encoders and five text classification datasets without sacrificing predictive performance.",
"Finally, we demonstrate that TaSc consistently provides more faithful attention-based explanations compared to three widely-used interpretability techniques.",
"1 1 Introduction Natural Language Processing (NLP) approaches for text classification are often underpinned by large neural network models (Cho et al., 2014; Devlin et al., 2019).",
"Despite the high accuracy and efficiency of these models in dealing with large amounts of data, an important problem is their increased complexity that makes them opaque and hard to interpret by humans which usually treat 1 Code is available at: https://github.com/ GChrysostomou/tasc.git them as black boxes (Zhang et al., 2018; Linzen et al., 2019).",
"Attention mechanisms (Bahdanau et al., 2015) produce a probability distribution over the input to compute a vector representation of the entire token sequence as the weighted sum of its constituent vectors.",
"A common practice is to provide explanations for a given prediction and qualitative model analysis by assigning importance to input tokens using scores provided by attention mechanisms (Chen et al., 2017; Wang et al., 2016; Jain et al., 2020; Sun and Lu, 2020) as a mean towards model interpretability (Lipton, 2016; Miller, 2019).",
"A faithful explanation is one that accurately represents the true reasoning behind a model's prediction (Jacovi and Goldberg, 2020).",
"A series of recent studies illustrate that explanations obtained by attention weights do not always provide faithful explanations (Serrano and Smith, 2019) while different text encoders can affect attention interpretability, e.g. results can differ when using a recurrent or non-recurrent encoder (Wiegreffe and Pinter, 2019).",
"A limitation of attention as an indicator of input importance is that it refers to the word in context due to information mixing in the model (Tutek and Snajder, 2020).",
"Motivated by this, we aim to improve the effectiveness of neural models in providing more faithful attention-based explanations for text classification, by introducing non-contextualised information in the model.",
"Our contributions are as follows: We introduce three Task-Scaling (TaSc) mechanisms ( 4), a family of encoder-independent components that learn task-specific non-contextualised importance scores for each word in the vocabulary to scale the original attention weights which can be easily ported to any neural architecture; We show that TaSc variants offer more robust, consistent and faithful attention-based explanations compared to using vanilla attention in a set of standard interpretability benchmarks, without sacrificing predictive performance ( 6); We demonstrate that attention-based explanations with TaSc consistently outperform explanations obtained from two gradient-based and a word-erasure explanation approaches ( 7).",
"Explanations for neural networks can be obtained by identifying which parts of the input are important for a given prediction.",
"One way is to use sparse linear meta-models that are easier to interpret (Ribeiro et al., 2016; Lundberg and Lee, 2017; Nguyen, 2018).",
"Another way is to calculate the difference in a model's prediction between keeping and omitting an input token (Robnik-Sikonja and Kononenko, 2008; Li et al., 2016b; Nguyen, 2018).",
"Input importance is also measured using the gradients computed with respect to the input (Kinder-mans et al., 2016; Li et al., 2016a; Arras et al., 2016; Sundararajan et al., 2017).",
"Chen and Ji (2020) propose learning a variational word mask to improve model interpretability.",
"Finally, extracting a short snippet from the original input text (rationale) and using it to make a prediction has been recently proposed (Lei et al., 2016; Bastings et al., 2019; Treviso and Martins, 2020; Jain et al., 2020; Chalkidis et al., 2021).",
"Nguyen (2018) and Atanasova et al. (2020) compare explanations produced by different approaches, showing that in most cases gradient-based approaches outperform sparse linear meta-models.",
"Attention weights have been extensively used to interpret model predictions in NLP; i.e. (Cho et al., 2014; Xu et al., 2015; Barbieri et al., 2018; Ghaeini et al., 2018).",
"However, the hypothesis that attention should be used as explanation had not been explicitly studied until recently.",
"Jain and Wallace (2019) first explored the effectiveness of attention explanations.",
"They show that adversary attention distributions can yield equivalent predictions with the original attention distribution, suggesting that attention weights do not offer robust explanations.",
"In contrast to Jain and Wallace (2019), Wiegreffe and Pinter (2019) and Vashishth et al. (2019) demonstrate that attention weights can in certain cases provide robust explanations.",
"Pruthi et al. (2020) also investigate the ability of attention weights to provide plausible explanations.",
"They test this through manipulating the attention mechanism by penalising words a priori known to be relevant to the task, showing that the predictive performance remain relatively unaffected.",
"Sen et al. (2020) assess the plausibility of attention weights by correlating them with manually annotated explanation heat-maps, where plausibility refers to how convincing an explanation is to humans (Jacovi and Goldberg, 2020).",
"However, Jacovi and Goldberg (2020) and Grimsley et al. (2020) suggest caution with interpreting the results of these experiments as they do not test the faithfulness of explanations (e.g. an explanation can be non-plausible but faithful or vice-versa).",
"Serrano and Smith (2019) test the faithfulness of attention-based explanations by removing tokens to observe how fast a decision flip happens.",
"Results show that gradient attention-based rankings (i.e. combining an attention weight with its gradient) better predict word importance for model predictions, compared to just using the attention weights.",
"Tutek and Snajder (2020) propose a method to improve the faithfulness of attention explanations when using recurrent encoders by introducing a word-level objective to sequence classification tasks.",
"Focusing also on recurrent-encoders, Mohankumar et al. (2020) introduce a modification to recurrent encoders to reduce repetitive information across different words in the input to improve faithfulness of explanations.",
"To the best of our knowledge, no previous work has attempted to improve the faithfulness of attention-based explanations across different encoders for text classification by inducing task-specific information to the attention weights.",
"In a typical neural model with attention for text classification; one-hot-encoded tokens x i PR | V | are first mapped to embeddings e i P R d , where i P r 1 , ..., t s denotes the position in the sequence, t the sequence length, | V | the vocabulary size and d the dimensionality of the embeddings.",
"The embeddings e i are then passed to an encoder to produce hidden representations h i Enc p e i q , where h i P RN , with N the size of the hidden representation.",
"A vector representation c for the entire text sequence x 1 , ..., x t is subsequently obtained as the sum of h i weighted by attention scores i : c i c i , c i h i i , c P RN (1) Vector c is finally passed to the output, a fully-connected linear layer followed by a softmax activation function.",
"To obtain representations h i , we consider the following recurrent, non-recurrent and Transformer (Vaswani et al., 2017) encoders, Enc p .",
"q , as in (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019):",
"(i) bidirectional Long Short-Term Memory ( LSTM ; Hochreiter and Schmidhuber (1997));",
"(ii) bidirectional Gated Recurrent Unit ( GRU ; Cho et al. (2014));",
"(iii) Convolutional Neural Network ( CNN ; LeCun et al. (1999));",
"(iv) Multi-Layer Per-ceptron ( MLP );",
"(v) BERT 2 (Devlin et al., 2019).",
"Attention scores ( a i ) are computed by passing the representations ( h i ) obtained from the encoder to the attention mechanism which usually consists of a similarity function followed by softmax:",
"where W is a trainable model parameter; and",
"(ii) Scaled Dot-Product (Dot; Vaswani et al. (2017) ): p h i , q q h Ti q ? N (4) 4 Task-Scaling (TaSc) Mechanisms Attention indicates how well inputs around a position i correspond to the output (Bahdanau et al., 2015). For example, in a bidirectional recurrent 2 We use BERT to obtain h i with an attention mechanism on top for consistency with the other encoders encoder each token representation h i contains information from the whole sequence so the attention weights actually refer to the input word in context and not individually (Tutek and Snajder, 2020). Inspired by the simple and highly interpretable bag-of-words models, which assign a single weight for each word type (word in a vocabulary), we hypothesise that by scaling each input word's contextualised representation c i (see Eq. 1) by its attention score and and a non-contextualised word type scalar score, we can improve attention-based explanations. The intuition is that by having a less contextualised sequence representation c we can reduce information mixing for attention. For that purpose, we introduce the non-contextualised word type score s x i in Eq. 1 to enrich the text representation c , such that: c i h i i s x i , c P RN (5) We compute s x i by proposing three Task-Scaling ( TaSc ) mechanisms. 3 4.1 Linear TaSc (Lin-TaSc) We first introduce Linear TaSc (Lin-TaSc), the simplest method in the family of TaSc mechanisms that estimates a scalar weight for each word in the vocabulary by introducing a new vector u P R | V | . Given the input sequence x r x 1 , . . . , x t s representing one-hot-encodings of the tokens, we perform a look up on u to obtain the scalar weights of words in the sequence. u is randomly initialised and updated partially at each training iteration, because naturally each input sequence contains only a small subset of the vocabulary words. We then obtain a task-scaled embedding e i for a token i in the input by multiplying the original token embedding with its word type weight u i : e i u i e i (6) The intuition is that the embedding vector e i was trained on general corpora and is a non-contextualised generic representation of input x i .",
"As such the score u i will scale e i to the task.",
"We subsequently compute context-independent scores s x i for each token in the sequence, by summing all elements of its corresponding task-scaled embedding e i ; s x i d e i in a similar way that token embeddings are averaged in the top-layers of a 3 Number of parameters for each proposed mechanism in Appendix B. neural architecture.",
"We opted to sum-up and not average, because we want to retain large and small values from the task-scaled embedding vector e i (Atanasova et al., 2020).",
"4 As the attention scores pertain to the word in context (Tutek and Snajder, 2020), we also expect the score s x i to pertain to the word without the contextualised information.",
"That way, we complement attention which results into a richer sequence representation c .",
"Lin-TaSc assigns equal weighting to all the dimensions of the word embedding e i (see Eq. 6), but some of them might be more important than others.",
"Inspired by the RETAIN mechanism (Choi et al., 2016), Feature-wise TaSc (Feat-TaSc) learns different weights for each embedding dimension to identify the most important of them.",
"Compared to Lin-TaSc where e i is scaled uniformly across all vector dimensions, with Feat-TaSc each dimension is scaled independently.",
"To achieve this, we introduce a learnable matrix U P R | V | d .",
"Similar to Lin-TaSc, given the input sequence x , we perform a look up on U to obtain U s r u 1 , . . . , u t s .",
"U is randomly initialised and updated partially at each training iteration.",
"To obtain s x i , we perform a dot product between u i and embedding vector e i ; s x i u i e i .",
"Lin-TaSc and Feat-TaSc weigh the original word embedding e i but do not consider any interactions between embedding dimensions.",
"Conv-TaSc addresses this limitation by extending Lin-TaSc.",
"5 We apply a CNN 6 with n channels over the scaled embedding e i from Lin-TaSc, keeping a single stride and a 1-dimensional kernel.",
"This way, we ensure that input words remain context-independent.",
"We then sum over the filtered scaled embedding e fi , to obtain the scores s x i ; s x i d e fi .",
"4 4 We also tried max and mean-pooling or using the u i directly instead of s i in early experimentation resulting in lower results.",
"Jacovi and Goldberg (2020) propose that an appropriate measure of faithfulness of an explanation can be obtained through erasure (the most relevant parts of the inputaccording to the explanation are removed).",
"We therefore follow this evaluation approach similar to Serrano and Smith (2019), Atanasova et al. (2020) and Nguyen (2018).",
"7 5.1 Attention-based Importance Metrics We opt using the following three input importance metrics by Serrano and Smith (2019): 8 : Importance rank corresponding to normalised attention scores.",
": Provides a ranking by computing the gradient of the predicted label y with respect to each attention score i in descending order, such that i B y B i .",
": Scales the attention scores i with their corresponding gradients i .",
"Decision Flip Most Informative Token: The average percentage of decision flips (i.e. changes in model prediction) occurred in the test set by removing the token with highest importance.",
"Decision Flip Fraction of Tokens: The average fraction of tokens required to be removed to cause a decision flip in the test set.",
"Note that we conduct all experiments at the input level (i.e. by removing the token from the input sequence instead of only removing its corresponding attention weight) as we consider the scores from importance metrics to pertain to the corresponding input token following related work (Arras et al., 2016, 2017; Nguyen, 2018; Vashishth et al., 2019; Grimsley et al., 2020; Atanasova et al., 2020).",
"We use five datasets for text classification following Jain and Wallace (2019):",
"(i) SST (Socher et al., 2013);",
"(ii) IMDB (Maas et al., 2011);",
"(iii) ADR 7 Note that Jacovi and Goldberg (2020) argue that a human evaluation is not an appropriate method to test faithfulness.",
"8 Serrano and Smith (2019) show that gradient-based attention ranking metrics ( , ) are better in providing faithful explanations compared to just using attention ( ).",
"Tweets (Sarker et al., 2015);",
"(iv) AG News; 9 and",
"(v) MIMIC Anemia (Johnson et al., 2016).",
"See Table 1 for detailed data statistics.",
"A prerequisite of interpretability is to obtain robust explanations without sacrificing predictive performance (Lipton, 2016).",
"Table 2 shows the macro F1-scores of all models across datasets, encoders and attention mechanisms using the three TaSc variants (Lin-TaSc, Feat-TaSc and Conv-TaSc described in Section 4) and without TaSc (No-TaSc).",
"10 In general, all TaSc models obtain comparable performance and in some cases outperform No-TaSc across datasets and attention mechanisms.",
"However, our main aim is not to improve predictive performance but the faithfulness of attention-based explanations, which we illustrate below.",
"Table 3 and Figure 1 present the mean average percentage of decision flips (higher is better) across attention mechanisms, encoders and datasets by removing the most informative token for TaSc variants and No-TaSc for all attention-based importance metrics (see Section 5).",
"In Table 3, we observe that TaSc variants are effective in identifying the single most important token, outperforming No-TaSc in 12 out of 18 cases across attention-based importance metrics.",
"This suggests that the attention mechanisms benefit from the non-contextualised information encapsulated in TaSc when allocating importance to the input tokens.",
"Models using Tanh without TaSc appear to produce on average a higher percentage of decision flips compared to those using the Dot mechanism.",
"Using either of the TaSc variants improves both 9 https://di.unipi.it/gulli/AG_corpus_ of_news_articles.html 10 For model hyper-parameters and prepossessing steps see Appendix A. 11 Lower predictive performance is observed with BERT in MIMIC, as BERT accepts a maximum of 512 word pieces as input.",
"See Appendix A. Data Enc() No-TaSc Lin-TaSc Feat-TaSc Conv-TaSc Dot Tanh Dot Tanh Dot Tanh Dot Tanh SST BERT .91 .90 .89 .88 .85 .88 .91 .91 LSTM .76 .75 .79 .79 .79 .80 .78 .77 GRU .76 .77 .79 .78 .80 .79 .77 .77 MLP .76 .76 .78 .78 .79 .78 .79 .79 CNN .76 .74 .80 .78 .80 .80 .78 .76 ADR BERT .80 .79 .78 .77 .79 .76 .78 .77 LSTM .74 .73 .75 .75 .74 .75 .73 .75 GRU .74 .73 .76 .75 .74 .76 .74 .75 MLP .74 .68 .75 .74 .75 .74 .75 .74 CNN .73 .69 .75 .74 .74 .75 .76 .75 IMDB BERT .93 .93 .93 .92 .92 .92 .93 .93 LSTM .89 .89 .88 .88 .88 .89 .89 .89 GRU .89 .90 .88 .88 .89 .89 .89 .89 MLP .88 .88 .88 .88 .88 .88 .89 .88 CNN .88 .88 .88 .88 .88 .88 .88 .89 AG BERT .94 .94 .94 .94 .94 .94 .94 .94 LSTM .92 .93 .92 .92 .92 .92 .92 .92 GRU .92 .92 .92 .92 .92 .92 .92 .92 MLP .92 .92 .92 .92 .91 .91 .92 .92 CNN .92 .92 .92 .92 .92 .92 .92 .92 MIMIC BERT 11 .82 .84 .82 .83 .83 .83 .83 .83 LSTM .87 .89 .87 .87 .88 .88 .88 .88 GRU .87 .89 .87 .88 .88 .88 .88 .88 MLP .87 .87 .87 .86 .86 .86 .87 .86 CNN .88 .89 .88 .87 .87 .87 .88 .88 Table 2: F1-macro average scores (3 runs) across datasets, encoders and attention mechanisms for models with and without TaSc (No-TaSc).",
"mechanisms, with Dot mechanism benefiting the most, making it comparable to Tanh.",
"For example, Dot moves from 8.2% with No-TaSc to 11.8% with Lin-TaSc, which is closer to 14.0% achieved by Lin-TaSc with Tanh (for ).",
"The first row of Figure 1 presents a comparison across encoders.",
"TaSc variants achieve improved performance over No-TaSc across all encoder variants with and .",
"All TaSc variants yield comparable results with the exception",
"of Conv-TaSc with BERT.",
"Results further suggest that non-recurrent encoders (MLP, CNN) without TaSc outperform recurrent encoders (LSTM, GRU) and BERT which has the poorest performance.",
"We hypothesise that this is due to the attention module becoming more important without feature contextualisation which is similar to findings of Serrano and Smith (2019) and Wiegreffe and Pinter (2019).",
"However, we observe that using any of the TaSc variants across encoders results into improvements with LSTM and GRU becoming comparable to MLP and CNN.",
"For example, BERT without TaSc improves from 5.7% to 8.0% (relative improvement 1.4x) and 9.3% (relative improvement 1.6x) using Lin-TaSc and Feat-TaSc respectively (for ).",
"Observing results in the second row of Figure 1, we see that TaSc variants outperform No-TaSc in all datasets when using and .",
"This highlights the robustness of TaSc as improvements are irrespective of the dataset.",
"In general, Lin-TaSc and Feat-TaSc perform equally well, however Lin-TaSc has the smaller number of parameters amongst the three variants.",
"Similar to the findings of Serrano and Smith (2019) best results overall, irrespective of the use of TaSc, are obtained using to rank importance.",
"Providing one token (i.e., the most informative) as an explanation is not always a realistic approach to assessing faithfulness.",
"In our second experiment, we test TaSc by measuring the fraction of important tokens required to be removed to cause a decision flip (change model's prediction).",
"Table 4 and Figure 2 show the mean average fraction of tokens required to be removed to cause a decision flip (lower is better) across attention mechanisms, encoders and datasets for all importance metrics.",
"In Table 4, we see that attention-based explanations from models trained with any of the TaSc mechanisms require on average a lower fraction of tokens to cause a decision flip compared to",
"No-(a)",
"TaSc (in 17 out of 18 cases).",
"Overall Lin-TaSc achieves higher or comparable relative improvements over Conv-TaSc and Feat-TaSc in 5 out of 6 times.",
"We present an across encoders comparison in the first row of Figure 2.",
"All three TaSc variants obtain comparable performance with the exception of Conv-TaSc with BERT.",
"We hypothesise that with BERT, Conv-TaSc fails to capture interactions between embedding dimensions due to perhaps higher contextualisation of BERT embeddings (i.e. contain more duplicate information).",
"Similarly to the previous experiment results suggest that nonrecurrent encoders (MLP and CNN) without TaSc outperform the remainder of encoders, with BERT having the worst performance.",
"This strengthens our hypothesis that attention becomes more important to a model with reduced contextualisation.",
"When using TaSc, performance across all encoders becomes comparable with the exception of BERT.",
"For example, GRU improves from .43 with No-TaSc to .16 with Lin-TaSc, .17 with Feat-TaSc and .18 with Conv-TaSc (for ).",
"The second row of Figure 2 presents results across datasets.",
"All three TaSc mechanims manage to outperform vanilla attention.",
"Lin-TaSc and Feat-TaSc perform comparably, with the first having a slight edge obtaining highest relative improvements in 3 out of 5 datasets with .",
"For example in ADR, No-TaSc requires on average .77 of all tokens to be removed for a decision flip to occur compared to .34 obtained by Lin-TaSc (for ).",
"The benefits of TaSc become evident when considering longer sequences.",
"For example in MIMIC, Lin-TaSc requires on average 44 tokens to cause a decision flip compared to 220 for No-TaSc.",
"We also perform a detailed comparison between the best performing TaSc variant (Lin-TaSc) and vanilla attention (No-TaSc) across all test instances.",
"Figure 3 shows box-plots with the median fraction of tokens required to be removed for causing a decision flip when ranking tokens by all three importance metrics.",
"For brevity we present results for four cases.",
"We notice that the median fraction of tokens required to cause a decision flip for Lin-TaSc using is higher compared to No-TaSc in certain cases.",
"However, Lin-TaSc results in consistently lower medians (with substantially reduced variances) compared to No-TaSc using and which are more effective importance metrics.",
"This is particularly visible in ADR using BERT, where the 25% and 75% percentiles are much closer to the median values, compared to No-TaSc.",
"Reduced variances suggest that the explanation faithfulness across instances remains consistent.",
"We finally compare explanations provided by using Lin-TaSc and to three standard non-attention input importance metrics without TaSc which are strong baselines for explainability (Nguyen, 2018; Atanasova et al., 2020).",
"Word Omission (WO) (Robnik-Sikonja and Kononenko, 2008; Nguyen, 2018): Ranking input words by computing the difference between the probabilities of the predicted class when including a word i and omitting it: WO i p p y | x q p p y | x z x i q InputXGrad ( x x ) (Kindermans et al., 2016; Atanasova et al., 2020): Ranking words by multiplying the gradient of the input by the input with respect to the predicted class: x i B y B x i Integrated Gradients ( IG ) (Sundararajan et al., 2017): Ranking words by computing the integral of the gradients taken along a straight path from a baseline input to the original input, where the baseline is the zero embedding vector.",
"Comparison Results Table 5 shows the results on decision flip (fraction of tokens removed) comparing the best performing attention-based importance metric ( ) with Lin-TaSc to Non-TaSc models with WO, x x and IG importance metrics across all encoders and datasets.",
"12 We observe that using with TaSc to rank word importance requires a lower fraction of tokens to cause a decision flip on average compared to WO, x x and IG without TaSc.",
"We outperform the other explanation approaches in 40 out of 50 cases, whilst obtaining comparable performance in other 5 cases.",
"This demonstrates the efficacy of TaSc in providing more faithful attention-based explanations than strong baselines without TaSc (Nguyen, 2018; Atanasova et al., 2020).",
"The improvements are particularly evident using BERT as an encoder.",
"In IMDB, WO with Tanh requires on average .23 of the tokens to be removed for a decision flip compared to just .07 for with TaSc.",
"We also observe that the attention-based importance metric ( ) with TaSc is a more robust explanation technique than non-attention based ones, obtaining lower variance in the fraction of tokens required to cause a decision flip across encoders.",
"For example with TaSc and Tanh requires a fraction of tokens in the range of .01-.05 compared to IG which requires .02-.43 in MIMIC, showing the consistency of our proposed approach.",
"Finally we observe that TaSc consistently improves non-attention based explanation approaches (WO, x x and IG) requiring a lower fraction of tokens to be removed compared to Non-TaSc across encoders, datasets and attention mechanisms in the majority of cases (see full results in Appendix E).",
"We finally examine qualitatively what type of information the parameter u from Lin-TaSc learns.",
"Similar to a bag-of-words model, our initial hypothesis is that u will assign high scores to the words that are most relevant to the task.",
"Figure 4 illustrates the 5 highest and lowest scored words from the IMDB and ADR datasets with a LSTM encoder and Dot attention and CNN encoder and Tanh attention respectively.",
"For brevity we include two examples, however observations hold similar throughout other configurations (e.g. encoders, datasets) and when increasing the number of top-k words.",
"We first observe in 4a, that indeed words expressing sentiment are assigned with high scores (e.g. excellent, waste, perfect ), either positive or negative.",
"However, a positive or negative sign does 12 We do not compare with LIME (Ribeiro et al., 2016) because WO and the gradient-based approaches outperform it (Nguyen, 2018; Atanasova et al., 2020).",
"not correspond to supporting the positive or negative class respectively.",
"For example withdrawal in ADR can be considered relevant to positive class, yet it is negatively scored.",
"Also sick can be considered a withdrawal symptom which is relevant to the negative class, yet it is positively scored.",
"We speculate that this happens due to the complex nonlinear relationships between the input words and the target classes learned by the model.",
"We introduced TaSc, a family of three encoder-independent mechanisms that induce context-independent task-specific information to attention.",
"We conducted an extensive series of experiments showing the superiority of TaSc over vanilla attention on improving faithfulness of attention-based interpretability without sacrificing predictive performance.",
"Finally, we showed that attention-based explanations with TaSc outperform other interpretability techniques.",
"For future work, we will explore the effectiveness of TaSc in sequence-to-sequence tasks similar to Vashishth et al. (2019).",
"We would like to thank the anonymous reviewers for their constructive and detailed comments that helped to improve the paper.",
"Nikolaos Aletras is supported by EPSRC grant EP/V055712/1, part of the European Commission CHIST-ERA programme, call 2019 XAI: Explainable Machine Learning-based Artificial Intelligence."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"other"
] |
[
"The research of knowledge-driven conversational systems is largely limited due to the lack of dialog data which consists of multi-turn conversations on multiple topics and with knowledge annotations.",
"In this paper, we propose a Chinese multi-domain knowledge-driven conversation dataset, KdConv , which grounds the topics in multi-turn conversations to knowledge graphs.",
"Our corpus contains 4.5K conversations from three domains (film, music, and travel), and 86K utterances with an average turn number of 19.0.",
"These conversations contain in-depth discussions on related topics and natural transition between multiple topics.",
"To facilitate the following research on this corpus, we provide several benchmark models.",
"Comparative results show that the models can be enhanced by introducing background knowledge, yet there is still a large space for leveraging knowledge to model multi-turn conversations for further research.",
"Results also show that there are obvious performance differences between different domains, indicating that it is worth further explore transfer learning and domain adaptation.",
"The corpus and benchmark models are publicly available 1 .",
"It has been a long-term goal of artificial intelligence to deliver human-like conversations, where background knowledge plays a crucial role in the success of conversational systems (Shang et al., 2015; Li et al., 2016a; Shao et al., 2017).",
"In task-oriented dialog systems, background knowledge is defined as slot-value pairs, which provides key information for question answering or recommendation, and has been well defined and thoroughly studied (Wen et al., 2015; Zhou et al., 2016).",
"In Equal contribution Corresponding author: Minlie Huang.",
"open-domain conversational systems, it is important but challenging to leverage background knowledge, which is represented as either knowledge graphs (Zhu et al., 2017; Zhou et al., 2018a) or unstructured texts (Ghazvininejad et al., 2018), for",
"making effective interactions.",
"Recently, a variety of knowledge-grounded conversation corpora have been proposed (Zhou et al., 2018b; Dinan et al., 2018; Moghe et al., 2018; Moon et al., 2019; Wu et al., 2019; Liu et al., 2018; Tuan et al., 2019; Qin et al., 2019) to fill the gap where previous datasets do not provide knowledge grounding of the conversations (God-frey et al., 1992; Shang et al., 2015; Lowe et al., 2015).",
"CMU DoG (Zhou et al., 2018b), India DoG (Moghe et al., 2018), and Wizard of Wikipedia (Dinan et al., 2018) demonstrate attempts for generating informative responses with topic-related Wikipedia articles.",
"However, these datasets are not suitable for modeling topic transition or knowledge planning through multi-turn dialogs based on the relations of topics.",
"OpenDialKG (Moon et al., 2019) and DuConv (Wu et al., 2019) use knowledge graphs as knowledge resources.",
"Nevertheless, the number of topics is limited to one (Moon et al., 2019) or two (Wu et al., 2019), which is not sufficient for diversified topic transition in humanlike conversations.",
"Therefore, these knowledge-grounded dialog datasets still have limitations in modeling knowledge interactions 2 in multi-turn conversations.",
"In this paper, we propose KdConv , a Chinese multi-domain dataset towards multi-turn K nowledged riven Conv ersation, which is suitable for modeling knowledge interactions in multi-turn human-like dialogues, including knowledge planning, knowledge grounding, knowledge adaptations, etc.",
"KdConv contains 86K utterances and 2 Refer to knowledge planning, knowledge grounding, knowledge adaptations in dialog systems.",
"4.5K dialogues in three domains, 1.5K dialogues for each domain (an example is shown in Figure 1).",
"Each utterance is annotated with related knowledge facts in the knowledge graph, which can be used as supervision for knowledge interaction modeling.",
"Furthermore, conversations of KdConv contain diversified topics ranged from one to four, without any pre-defined goals or constraints, which are closer to real human-human conversations than other datasets.",
"The relations of topics are explicitly defined in the knowledge graph.",
"Moreover, KdConv covers three domains, including film, music, and travel, which can be used to explore knowledge adaptation between different domains.",
"We provide a benchmark to evaluate both generation-and retrieval-based conversational models on the proposed dataset with/without access to the corresponding knowledge.",
"Results show that knowledge grounding contributes to the improvement of these models, while existing models are still not strong enough to deliver knowledge-coherent conversations, indicating a large space for future work.",
"We collect a new dataset, KdConv, for knowledge-driven conversation generation in Chinese.",
"KdConv contains 86K utterances and 4.5K dialogues in three domains (film, music, and travel).",
"The average turn number is about 19, remarkably longer than those in other corpora.",
"KdConv provides a benchmark to evaluate the ability of generating conversations with access to the corresponding knowledge in three domains.",
"The corpus can empower the research of not only knowledge-grounded conversation generation, but also domain adaptation or transfer learning between similar domains (e.g., from film to music) or dissimilar domains (e.g., from music to travel).",
"We provide benchmark models on this corpus to facilitate further research, and conduct extensive experiments.",
"Results show that the models can be enhanced by introducing background knowledge, but there is still much room for further research.",
"The corpus and the models are publicly available 3 .",
"Recently, open-domain conversation generation has been largely advanced due to the increase of publicly available dialogue data (Godfrey et al., 1992; Ritter et al., 2010; Shang et al., 2015; Lowe et al., 2015).",
"However, the lack of annotation of background information or related knowledge results in significantly degenerated conversations, where the text is bland and strangely repetitive (Holtzman et al., 2019).",
"These models produce conversations that are substantially different from those humans make, which largely rely on background knowledge.",
"To facilitate the development of conversational models that mimic human conversations, there have been several knowledge-grounded corpora proposed.",
"Some datasets (Zhou et al., 2018b; Ghazvininejad et al., 2018; Liu et al., 2018; Tuan et al., 2019; Qin et al., 2019) collect dialogues and label the knowledge annotations using NER, string match, artificial scoring, and filtering rules 3 https://github.com/thu-coai/KdConv based on external knowledge resources (Liu et al., 2018).",
"However, mismatches between dialogues and knowledge resources introduce noises to these datasets.",
"To obtain the high-quality knowledge-grounded datasets, some studies construct dialogues from scratch with human annotators, based on the unstructured text or structured knowledge graphs.",
"For instance, several datasets (Zhou et al., 2018b; Dinan et al., 2018; Gopalakrishnan et al., 2019) have human conversations where one or both participants have access to the unstructured text of related background knowledge, while OpenDialKG (Moon et al., 2019) and DuConv (Wu et al., 2019) build up their corpora based on structured knowledge graphs.",
"In Table 1, we present a survey on existing human-labeled knowledge-grounded dialogue datasets.",
"CMU DoG (Zhou et al., 2018b) utilizes 30 Wikipedia articles about popular movies as grounded documents, which explores two scenarios: only one participant has access to the document, or both have.",
"Also using Wikipedia articles, however, Wizard of Wikipedia (WoW) (Dinan et al., 2018) covers much more dialogue topics (up to 1,365), which puts forward a high demand for the generalization ability of dialog generation models.",
"One other difference from CMU DoG is that in WoW, only one participant has access to an information retrieval system that shows the worker paragraphs from Wikipedia possibly relevant to the conversation, which is unobservable to the other.",
"In addition to the unstructured text, India DoG (Moghe et al., 2018) uses fact tables as background resources.",
"The idea of using structured knowledge to construct dialogue data is also adopted in OpenDialKG (Moon et al., 2019), which has a similar setting to KdConv.",
"OpenDialKG contains chit-chat conversations between two agents engaging in a dialog about a given topic.",
"It uses the Freebase knowl-Domain Film Music Travel Total # entities 7,477 4,441 1,154 13,072 # start 559 421 476 1,456 # extended 6,917 4,020 678 11,615 # relations 4,939 4,169 7 9,115 # triples 89,618 56,438 10,973 157,029 Avg.",
"edge base (Bast et al., 2014) as background knowledge.",
"In OpenDialKG, the entities and relations that are mentioned in the dialog are annotated, and it also covers multiple domains (film, books, sports, and music).",
"However, the limitation is that there are much fewer turns in a conversation, and the whole dialogue is restricted to only one given topic, which is not suitable for modeling topic transition in human-like conversations.",
"To the best of our knowledge, DuConv (Wu et al., 2019) is the only existing Chinese human-labeled knowledge-grounded dialogue dataset.",
"DuConv also utilizes unstructured text like short comments and synopsis, and structured knowledge graphs as knowledge resources.",
"Given the knowledge graph, it samples two linked entities, one as the transi-tional topic and the other as the goal topic, to construct a conversation path.",
"This path is used to guide participants toward the goal of the dialogue, which, as argued in Wu et al. (2019), can guide a model to deliver proactive conversations.",
"However, the existence of the target path is inconsistent with an open dialogue in reality because humans usually do not make any assumption about the final topic of a conversation.",
"Beyond that, the knowledge graph and the goal knowledge path are only annotated for the whole dialogue, which cannot provide explicit supervision on knowledge interactions for conversational models.",
"KdConv is designed to collect open-domain multiturn conversations for modeling knowledge interactions in human-like dialogues, including knowledge planning, knowledge grounding, knowledge adaptations, etc.",
"However, the open-domain background or commonsense knowledge is too large in scale (e.g., there are over 8 million concepts and 21 million relations in ConceptNet (Speer and Havasi, 2013)).",
"Thus, it is costly and time-consuming to Domain Film Music Travel Total # dialogues 1,500 4,500 # dialogues in Train/Dev/Test 1,200/150/150 3,600/450/450 # utterances 36,618 24,885 24,093 85,596 Avg.",
"collect multi-turn conversations from scratch based on such large-scale knowledge.",
"KdConv is proposed as one small step to achieve this goal, where we narrowed down the scale of background knowledge to several domains (film, music, and travel) and collected conversations based on the domain-specific knowledge.",
"KdConv contains similar domains (film and music) and dissimilar domains (film and travel) so that it offers the possibility to investigate the generalization and transferability of knowledge-driven conversational models with transfer learning or meta learning(Gu et al., 2018; Mi et al., 2019).",
"In the following subsections, we will describe the two steps in data collection: (1) Constructing the domain-specific knowledge graph; (2) Collecting conversation utterances and knowledge interactions by crowdsourcing.",
"As the sparsity and the large scale of the knowledge were difficult to handle, we reduced the range of the domain-specific knowledge by crawling the most popular films and film stars, music and singers, and attractions as start entities, from several related websites for the film 4 /music 5 /travel 6 domain.",
"The knowledge of these start entities contains both structured knowledge triples and unstructured knowledge texts, which make the task more general but challenging.",
"After filtering the start entities which have few knowledge triples, the film/music/travel domain contains 559/421/476 4 https://movie.douban.com/top250 5 https://music.douban.com/top250 6 https://travel.qunar.com/ p-cs299914-beijing-jingdian start entities, respectively.",
"After crawling and filtering the start entities, we built the knowledge graph for each domain.",
"Given the start entities as seed, we retrieved their neighbor entities within three hops from XLORE, a large-scale English-Chinese bilingual knowledge graph (Wang et al., 2013).",
"We merged the start entities and these retrieved entities (nodes in the graph) and relations (edges in the graph) into a domain-specific knowledge graph for film and music domains.",
"For the travel domain, we built the knowledge graph with the knowledge crawled only from the Web, because XLORE provides little knowledge for start entities in the travel domain.",
"There are two types of entities in the knowledge graph: one is the start entities crawled from the websites, the other is the extended entities that are retrieved from XLORE (film/music), or websites (travel) to provide related background knowledge.",
"The statistics of the knowledge graphs used in constructing KdConv are provided in Table",
"2. 3.2 Dialogue Collection We recruited crowdsourced annotators to generate multi-turn conversations that are related to the domain-specific knowledge graph without any pre-defined goals or constraints.",
"During the conversation, two speakers both had access to the knowledge graph rather than that only one participant had access to the knowledge, as proposed in WoW (Di-nan et al., 2018) where one party always leads the conversation with an expert-apprentice mode.",
"Allowing two participants to access the knowledge, in our corpus the two parties can dynamically change their roles, as either leader or follower, which is more natural and real to human conversations.",
"In addition to making dialogue utterances, the annotators were also required to record the related knowledge triples if they generated an utterance according to some triples.",
"To increase the knowledge exposure in the collected conversations, the annotators were instructed to start the conversation based on one of the start entities, and they were also encouraged to shift the topic of the conversation to other entities in the knowledge graph.",
"Thus, the topics of conversations and the knowledge interactions in KdConv are diversified and unconstrained.",
"In order to ensure the naturalness of the generated conversations, we filtered out low-quality dialogues, which contain grammatical errors, inconsistencies of knowledge facts, etc.",
"The distinct-4 score is Figure 2: Statistics of the number of dialogues where at least k ( k = 2 , 3 , 4) topics have been discussed in the first n turns.",
"0.54/0.51/0.42 for the film/music/travel domain, which is comparable to the score of DuConv (Wu et al., 2019), 0.46.",
"The distinct-4 score decreases, due to the decrease of knowledge triples and utterances in three domains, as shown in Table",
"3. 3.3 Corpus Statistics The detailed statistics of KdConv are shown in Table",
"3. We collect 1,500 dialogues for each domain.",
"The training, validation, and test sets are partitioned with the ratio of 8:1:1.",
"Note that the number of conversation turns in the film domain is larger than those in the music/travel domains (24.4 vs. 16.6/16.1), while the utterance lengths are similar (13.3 vs. 12.9/14.5 at the token level, and 20.4 vs. 19.5/22.9 at character level).",
"As aforementioned, the dialogues in the real world are not limited to one or two topics, while discussing multiple topics in depth usually requires a conversation having enough number of turns.",
"In order to verify this point, we analyze the relationship between the number of turns and the number of topics.",
"Note that the topics are defined as the distinct head entities in the knowledge triples and the central nodes with a degree greater than 1 in the knowledge graph.",
"The results of three domains are shown in Figure",
"2. Given a number k ( k = 2 , 3 , 4) of topics and a number n of conversation turns, we count the number of dialogues where at least k topics have been discussed in the first n turns.",
"It can be observed that more topics tend to appear in a dialogue only if there are enough conversation turns.",
"For instance, most dialogues involve at least 2 topics when the number of turns exceeds 15.",
"This is consistent with the fact that if a conversation is very short, speakers will not be able to discuss in detail, let alone natural transition between multiple topics.",
"To analyze topic transition in our dataset, we provide top-3 topic transition in the film domain, as shown in Table",
"4. As can be seen, topic transition has diverse patterns conditioned on different hops.",
"With the increase of the hops of topic transition, the complexity of topic transition goes up.",
"Compared to DuConv (Wu et al., 2019), the dialogues of KdConv contain multiple and diverse topics instead of fixed two topics, leading to diverse and complex topic transition, which are more suitable for the research of knowledge planning in human-like conversations.",
"Note that the relation Information appeared in the last row is different from the other relations, which means the target topic is mentioned in unstructured texts describing the information about the source topic.",
"The low frequency of the relation Information demonstrates that people prefer to shift the topic according to the structured relations rather than unstructured texts, as adopted in WoW (Dinan et al., 2018).",
"To provide benchmark models for knowledge-driven conversation modeling, we evaluated both generationand retrieval-based models on our corpus.",
"In order to explore the role of knowledge annotation, we evaluated the models with/without access to the knowledge graph of our dataset.",
"Language Model (LM) (Bengio et al., 2003): We trained a language model that maximizes the log likelihood: log P ( x ) = (cid:80) t log P ( x t | x <t ) , where x denotes a long sentence that sequentially concatenates all the utterances of a dialogue.",
"Seq2Seq (Sutskever et al., 2014): An encoder-decoder model augmented with attention mechanism (Bahdanau et al., 2014).",
"The input of the encoder was the concatenation of the past k 1 utterances, while the target output of the decoder was the k -th utterance.",
"k was set to 8 in the experiment.",
"If there were fewer than k 1 sentences in the dialogue history, all the past utterances would be used as input.",
"HRED (Serban et al., 2016): A hierarchical recurrent encoder-decoder model that has a specific context RNN to incorporate historical conversational utterances into a context state, which is used as the initial hidden state of the decoder.",
"The adapted model generates the k -th utterance based on the past k 1 utterances, where k was also set to 8, for fair comparison with Seq2Seq.",
"All the generative models were trained by optimizing the cross-entropy loss: L ( g ) 0 = 1 TT (cid:88) t =1 log P ( x t = x t ) , where x t denotes the predicted token at the time step t , while x t is the t -th token of the target sentence.",
"BERT (Devlin et al., 2019): We adapted this deep bidirectional transformers (Vaswani et al., 2017) as a retrieval-based model.",
"For each utterance (except the first one in a dialog), we extracted keywords in the same way as Wu et al. (2017) and retrieved 10 response candidates, including the golden truth based on the BM25 algorithm (Robertson et al., 1995).",
"The training task is to predict whether a candidate is the correct next utterance given the context, where a sigmoid function was used to output the probability score y = P ( y = 1) and the cross-entropy loss was optimized: L ( r ) 0 = y log y (1 y ) log(1 y ) , where y { 0 , 1 } is the true label.",
"A key-value memory module (Miller et al., 2016) is introduced to the aforementioned models to utilize the knowledge information.",
"We treated all knowledge triples mentioned in a dialogue as the knowledge information in the memory module.",
"For a triple that is indexed by i , we represented the key memory and the value memory respectively as a key vector k i and a value vector v i , where k i is the average word embeddings of the head entity and the relation, and v i is those of the tail entity.",
"We used a query vector q to attend to the key vectors k i ( i = 1 , 2 , ... ) : i = softmax i ( q T k i ) , then the weighted sum of the value vectors v i ( i = 1 , 2 , ... ) , v = (cid:80) i i v i , was incorporated into the decoding process (for the generation-based models, concatenated with the initial state of the decoder) or the classification (for the retrieval-based model, concatenated with the <CLS> vector).",
"For Seq2Seq, q is the final hidden state of the encoder.",
"For HRED, we treated the context vector as the query, while for BERT, the output vector of <CLS> was used.",
"Note that our dataset has a sentence-level annotation on the knowledge triples that each utterance uses.",
"To force the knowledge-aware models to attend to the golden KG triples, we added an extra attention loss (for retrieval-based models, this loss was computed only on the positive examples): L att = 1 |{ truth }| (cid:88) i { truth } log i , where { truth } is the set of indexes of triples that are used in the true response.",
"The total loss are the weighted sum of L ( l ) 0 and L att : L ( l ) tot = L ( l ) 0 + L att , l { g, r } .",
"Note that the knowledge-enhanced BERT was initialized from the fine-tuned BERT discussed in Section 4.1.2, and the parameters of the transformers were frozen during training the knowledge related modules.",
"The purpose was to exclude the impact of the deep transformers but only examine the potential effects introduced by the background knowledge.",
"We implemented the above models with Tensor-Flow (Abadi et al., 2016), PyTorch (Paszke et al.,",
"2017) and CoTK (Huang et al., 2020).",
"The Jieba Chinese word segmenter 7 was employed for tok-enization.",
"The 200-dimensional word embeddings were initialized by Song et al. (2018), while the unmatched ones were randomly sampled from a standard normal distribution N (0 , 1) .",
"The type of RNN network units was all GRU (Cho et al., 2014) and the number of hidden units of GRU cells were all set to 200.",
"ADAM (Kingma and Ba, 2014) was used to optimize all the models with the initial learning rate of 5 10 5 for BERT and 10 3 for others.",
"The mini-batch sizes are set to 2 dialogues for LM and 32 pairs of post and response for Seq2Seq and HRED.",
"We measured the performance of all the retrieval-based models using Hits@1 and Hits@3, same as Zhang et al. (2018) and Wu et al. (2019).",
"8 We adopted several widely-used metrics to measure the quality of the generated response.",
"We calculated Perplexity (PPL) to evaluate whether the generation result is grammatical and fluent.",
"BLEU-1/2/3/4 (Papineni et al., 2002) is a popular metric to compute the k -gram overlap between a generated sentence and a reference (Sordoni et al., 2015; Li et al., 2016b).",
"Distinct-1/2/3/4 (Li et al., 2016b) is also provided to evaluates the diversity of generated responses.",
"The results are shown in Table",
"5. We analyze the results from the following perspectives: The influence of knowledge: after introducing the knowledge, all the models were improved in terms of all the metrics except PPL in all the domains.",
"First, all the models obtain higher Hits@1 scores (in the music domain, BERT obtains an improvement of 0.4 on Hits@1).",
"After incorporating the knowledge into BERT, the performance of Hits@1 improves slightly, because the memory network which models knowledge information is rather shallow, compared to the deep structure in BERT.",
"Second, Seq2Seq and HRED both have better BLEUk scores (in the travel domain, Seq2Seq obtains an improvement of 7.2 on BLEU-4), which means a better quality of generated responses.",
"Third, the two generation-based models 7 https://github.com/fxsjy/jieba 8 For generative models, the rank is decided by the PPL values of candidate responses.",
"also gain larger Distinctk values (in the music domain, HRED obtains an improvement of 12.4 on Distinct-4), which indicates a better diversity of the generated results.",
"Comparison between models: In all the three domains, the knowledge-aware BERT model achieves the best performance in most of the metrics, as it retrieves the golden-truth response at a fairly high rate.",
"HRED performs best in BLEUk and Distinctk among all the generation-based baselines without considering the knowledge.",
"Knowledge-aware HRED has better results of BLEUk in the film and music domains and better results of Distinctk in the film domain, while the knowledge-enhanced Seq2Seq achieves the best Hits@1/3 scores among all the generation-based models.",
"Comparison between domains: For retrieval-based models, the performance is best in the film domain but worst in the travel domain, largely affected by the data size (see Table 3).",
"For generation-based models, however, the performance improves from the film domain to the travel domain, as the average number of utterances per dialogue decreases from 24.4 in the film domain to 16.1 in the travel domain (see Table 3).",
"The more utterances a dialogue contains, the more difficulties in conversation modeling for generation-based models.",
"Besides, the more diverse knowledge (1,837 entities and 318 relations in the film domain, vs. 699 entities and 7 relations in the travel domain) also requires the models to leverage knowledge more flexibly.",
"The difference between different domains can be further explored in the setting of transfer learning or meta learning in the following research.",
"To better understand the quality of the generated responses from the semantic and knowledge perspective, we conducted the manual evaluation for knowledge-aware BERT, knowledge-aware HRED, and HRED, which have achieved advantageous performance in automatic evaluation 9 .",
"Human annotators were asked to score a generated response in terms of the fluency and coherence",
"metrics.",
"The fluency score (rating scale is 0,1,2) is defined as whether the response is fluent and natural.",
"The coherence (rating scale is 0,1,2) is defined as whether a response is relevant and coherent to the context and the knowledge information.",
"We randomly sampled about 500 contexts from the test sets of the three domains and generated responses by each model.",
"These 1,500 context-response pairs in total and related knowledge graphs were presented to three human annotators.",
"We calculated the Fleiss' kappa (Fleiss, 1971) to measure inter-rater consistency.",
"Fleiss' kappa for Fluency and Coherence is from 0.37 to 0.74, respectively.",
"The overall 3/3 10 agreement for Fluency and Coherence is from 68.14% to 81.33% in the three domains.",
"The results are shown in Table",
"6. As can be seen, knowledge-aware BERT outperforms other models significantly in both metrics in all the three domains, which agrees with the results of automatic evaluation.",
"The Fluency is 2.00 because the retrieved responses are all human-written sentences.",
"The Fluency scores of both generation-based models are close to 2.00 (in the music domain, the Fluency of HRED is 1.90), showing that the generated responses are fluent and grammatical.",
"The Coherence scores of both HRED and knowledge-aware HRED are higher than 1.00 but still have a huge gap to 2.00, indicating that the generated responses are relevant to the context but not coherent to knowledge information in most cases.",
"After incorporating the knowledge information into HRED, the Coherence score is improved significantly in all the three domains, as the knowledge information is more expressed in the generated responses.",
"In this paper, we propose a Chinese multi-domain corpus for knowledge-driven conversation generation, KdConv.",
"It contains 86K utterances and 4.5K dialogues, with an average number of 19.0 turns.",
"Each dialogue contains various topics and sentence-level annotations that map each utterance with the related knowledge triples.",
"The dataset provides a benchmark to evaluate the ability to model knowledge-driven conversations.",
"In addition, KdConv covers three domains, including film, music, and travel, that can be used to explore domain adaptation or transfer learning for further research.",
"We provide generationand retrieval-based benchmark models to facilitate further research.",
"Extensive experiments demonstrate that these models can be enhanced by introducing knowledge, whereas there is still much room in knowledge-grounded conversation modeling for future work.",
"This work was jointly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096), and the National Key R&D Program of China (Grant No. 2018YFC0830200).",
"We thank THUNUS NExT Joint-Lab for the support."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Sarcasm is important to sentiment analysis on social media.",
"Sarcasm Target Identification (STI) deserves further study to understand sarcasm in depth.",
"However, text lacking context or missing sarcasm target makes target identification very difficult.",
"In this paper, we introduce multimodality to STI and present Multimodal Sarcasm Target Identification (MSTI) task.",
"We propose a novel multi-scale cross-modality model that can simultaneously perform textual target labeling and visual target detection.",
"In the model, we extract multi-scale visual features to enrich spatial information for different sized visual sarcasm targets.",
"We design a set of convolution networks to unify multi-scale visual features with textual features for cross-modal attention learning, and correspondingly a set of transposed convolution networks to restore multi-scale visual information.",
"The results show that visual clues can improve the performance of TSTI by a large margin, and VSTI achieves good accuracy.",
"Sarcasm is a type of sentiment in which people express their negative feelings using positive or intensified positive words.",
"It has the power to disguise the hostility of the speaker (Dews and Winner, 1995), thereby enhancing the effect of mockery or humor on the listener.",
"Sarcasm is prevalent on today's social media platforms such as Twitter, and automatic Sarcasm Target Identification (STI) bears great significance in customer service, opinion mining, and online harassment detection.",
"Previous works about STI have focused on text modality and proposed some methods such as rule-based, statistical classifier-based (Joshi et al., 2018), and deep learning models with socio-linguistic features (Patro et al., 2019).",
"For example, in Figure",
"1(a), we are not sure whether the context conveys positive or negative emotions if we only see the text This guy definitely deserves $15 an hour!.",
"However, the negative information comes from the image.",
"When observing a lazy guy in the picture lying on the chair, we can easily determine that the lazy guy is a sarcasm target and label the text This guy as a sarcasm target (ST).",
"Moreover, the sarcasm target sometimes does not appear explicitly in the text, which is marked as OUTSIDE '.",
"In ALTA Shared Task (Molla and Joshi, 2019), OUTSIDE ' cases account for over 30% of the data.",
"For example, in the tweet of Figure",
"1(b), the author teased that the skirt was too long, similar to a bed sheet; therefore, the sarcasm target should be the long skirt.",
"However, no sarcasm target appears in the text but we can label the long skirt as an ST with a blue bounding box in the picture.",
"The above examples illustrate the necessity of combining images for STI.",
"In this paper, we introduce a novel task called Multimodal Sarcasm Target Identification (MSTI) on social media data.",
"The MSTI task is to extract sarcasm targets (STs) from both texts and images 8164 in tweets.",
"The textual ST is a word or a phrase, and the visual ST is an object labeled by a bounding box, as shown in Figure 1. The challenge of the MSTI task is not only to extract both textual and visual features but also to leverage cross-modality interaction and semantic learning to improve the performance of STI.",
"The contributions of this paper can be summarized as follows: To the best of our knowledge, this is the first attempt to perform the MSTI task.",
"We build an MSTI dataset and propose a novel cross-modality MSTI framework.",
"Textual and multi-scale visual features are fused in a cross-modality encoder via convolution and transposed convolution.",
"Our model performs textual and visual tasks simultaneously in an end-to-end manner, and it can leverage textual and visual contexts to enhance textual and visual representations for MSTI.",
"We design a cross-modality attention visualization method in terms of text-to-image and image-to-text to illustrate the mutual effects between textual and visual modalities.",
"These results show the image regions and words extracted through cross-modality attention are the keys to sarcasm and explain the improved performance of TSTI and VSTI by cross-modality learning.",
"The comprehensive experimental results are presented.",
"The results indicate that the images in tweets improve the performance of TSTI by a large margin.",
"Comparisons with textual, object detection, and pretrained multimodal baselines show the advanced performance of our model.",
"The existing research on sarcasm analysis mainly focuses on sarcasm detection (SD) and sarcasm target identification (STI).",
"We begin the literature review with textual sarcasm.",
"Textual Sarcasm Detection.",
"Traditional sarcasm detection is defined as a binary classification of sarcastic or non-sarcastic sentiments in text (Guo et al., 2021).",
"Earlier approaches (Joshi et al., 2017) were based on sarcastic pattern rules (Riloff et al., 2013) or statistical models such as SVM (Joshi et al., 2015) or logistic regression (Bamman and Smith, 2015).",
"Recently, deep learning techniques have gained popularity.",
"Word embeddings and LSTM/CNN model were employed in Joshi et al. (2016); Zhang et al. (2016).",
"Furthermore, Peled and Reichart (2017) presented a neural machine translation framework and Tay et al. (2018) proposed an attention-based neural model to interpret and reason with sarcasm.",
"Xiong et al. (2019) proposed a self-matching network to capture incongruity information by exploring word-to-word interactions.",
"Agrawal et al. (2020) formulated sarcasm detection as a sequence classification problem by leveraging the natural shifts in various emotions over the course of a piece of text.",
"Babanejad et al. (2020) extended the architecture of BERT by incorporating both affective and contextual feature embeddings.",
"Guo et al. (2021) proposed a latent-optimized adversarial neural transfer model for cross-domain sarcasm detection.",
"Textual Sarcasm Target Identification.",
"To deepen the field of sarcasm analysis, STI has been well studied recently (Patro et al., 2019; Parameswaran et al., 2021).",
"The goal of STI is to label the subject of mockery or ridicule within sarcastic texts.",
"Patro et al. (2019) showed that the Exact Match (EM) accuracy on tweets is approximately 30%.",
"Joshi et al. (2018) introduced the STI problem and summarized the 2019 ALTA shared task regarding STI (Molla and Joshi, 2019).",
"The evaluation metrics such as EM accuracy and F1 score were presented.",
"Patro et al. (2019) presented a deep learning framework augmented with socio-linguistic features to detect sarcasm targets.",
"Parameswaran et al. (2019) employed an ensemble of classifiers such as SVM, logistic and linear classifiers to classify OUTSIDE ' and NOT OUTSIDE ', then used a rule-based approach to extract the target sarcasm words from the NOT OUTSIDE ' samples.",
"Multimodal Sarcasm Detection.",
"Benefiting from images, multimodal sarcasm detection (MSD) has gained increasing research attention.",
"Schifanella et al. (2016) first tackled this task as a multimodal classification problem.",
"They concatenated the visual and textual features and employed SVM or a neural network consisting of fully connected and softmax layers, to detect sarcasm.",
"Cai et al. (2019) extended the input modalities to a triplet of text, image, and image attributes, and they proposed a hierarchical fusion model for sarcasm detection.",
"Castro et al. (2019) proposed a video-level multimodal sarcasm detection task.",
"Features were ob-8165 This guy definitely deservces $ 15 an hour !",
"tained from three modalities, i.e., text, speech, and video, and an SVM classifier with RBF kernel was employed.",
"Sangwan et al. (2020) presented an RNN-based model and gating mechanism, which attempted to decide the weight of image modality regarding textual modality.",
"Pan et al. (2020) proposed a BERT-based model that concentrated on both intraand inter-modality incongruity for multimodal sarcasm detection.",
"Xu et al. (2020) constructed the decomposition network to model the discrepancy between image and text and the relation network to model the semantic association in cross-modality context.",
"Figure 2 illustrates the overall architecture of our MSTI model.",
"The model mainly consists of five components: (1) Backbone to MCE converter (B2M), (2) Multi-scale Cross-modality Encoder (MCE), (3) MCE to Neck network converter (M2N), (4) Textual Sarcasm Target Identification (TSTI), and (5) Visual Sarcasm Target Identification (VSTI).",
"We first extract textual and visual features separately.",
"The multi-scale visual features of the last three blocks of the pretrained backbone network are unified to the same dimension by B2M and input to the MCE together with textual features.",
"The MCE outputs cross-modality representations, where the parts corresponding to textual features are fed into BiLSTM-CRF to label the sequence for TSTI and the parts corresponding to multi-scale visual features restored by M2N are connected to the neck and head networks to predict bounding boxes for VSTI.",
"Textual Representation : We obtain contextual word embeddings from pretrained language models (LM) such as BERT (Devlin et al., 2019) to extract linguistic features.",
"Let S = ([ CLS ] , t 1 , ..., t n , [ SEP ]) be the token sequence and e = ( e 1 , ..., e n ) be the contextual word embeddings generated by a pretrained LM, where e i R d .",
"As shown in Figure 2, the contextual word embeddings e represent the textual input of the next module MCE.",
"such as ResNet (He et al., 2016), VGG (Simonyan and Zisserman, 2014), and CSPDarkNet (Wang et al., 2020).",
"To improve the detection performance of sarcasm targets with various sizes, our model performs VSTI using multi-scale visual features.",
"The multi-scale outputs at the last three blocks of the backbone are denoted as v 1 , v 2 , and v 3 , shown in Figure 2, for the later use of the Neck network.",
"The dimensions of v 1 , v 2 , and v 3 are d s 1 d s 1 d 1 , d s 2 d s 2 d 2 , and d s 3 d s 3 d 3 , respectively, where d s i d s i represents image scale and d i represents feature map, i = { 1 , 2 , 3 } .",
"The B2M converter aims to unify the dimensions of three visual features with the dimension d of textual feature and lower the scales of three visual features to reduce the computation of the MCE.",
"The B2M has three parts B 1 , B 2 , and B 3 corresponding to the visual features v 1 , v 2 , and v 3 .",
"Each part consists of convolutional layers followed by Rectified Linear Unit (ReLU) and max pooling layer.",
"Table 1 shows the architecture of B2M.",
"The input dimensions of B 1 , B 2 , and B 3 are 19 19 1024 , 38 38 512 , and 76 76 256 , respectively, when the backbone is set to CSPDarkNet53 (Bochkovskiy et al., 2020).",
"We denote the outputs of B 1 , B 2 , and B 3 as v B 1 , v B 2 , and v B 3 , respectively.",
"According to the computation of the convolutional layer, the output scale is (cid:4) I +2 P K S (cid:5) + 1 , where I is the dimension of input scale, P is padding, S is stride, and K is kernel.",
"The Conv generates feature maps of d , which is the same as the dimension of the word embeddings.",
"Then, we can obtain all v B 1 , v B 2 , and v B 3 with size 5 5 d .",
"Finally, we flatten the shape size 5 5 to 25 visual tokens { v p,qB i } (1 p, q 5) to generate the visual inputs of the MCE.",
"The MCE is based on the Transformer encoder architecture presented in Vaswani et al. (2017) and shown in the left of Figure 2. The Transformer encoder has a multi-head self-attention sub-layer and a fully connected feed-forward sub-layer.",
"A residual connection and layer normalization are employed around two sub-layers.",
"The Transformer encoder adopts scaled dot-product attention, which is defined as follows: Attention ( Q, K, V ) = softmax ( QKT d k ) V, (1) B2M B 1 Conv [ K = 3 3 ,P = 1 ,S = 2 ] MaxPooling [ 2 2 ] B 2 Conv (cid:20) K = 3 3 ,P = 1 ,S = 2 K = 3 3 ,P = 1 ,S = 2 (cid:21) MaxPooling [ 2 2 ] B 3 Conv (cid:20) K = 5 5 ,P = 2 ,S = 4 K = 3 3 ,P = 1 ,S = 2 (cid:21) MaxPooling [ 2 2 ] M2N C 1 UpSampling [ 2 2 ] ConvT [ K = 3 3 ,P = 1 ,S = 2 ] C 2 UpSampling [ 2 2 ] ConvT (cid:20) K = 3 3 ,P = 1 ,S = 2 K = 3 3 ,P = 1 ,S = 2 (cid:21) C 3 UpSampling [ 2 2 ] ConvT (cid:20) K = 3 3 ,P = 1 ,S = 2 K = 5 5 ,P = 2 ,S = 4 (cid:21) Table 1: Architecture of B2M and M2N converters.",
"where matrices Q , K , and V consist of queries, keys, and values, respectively, and d k is the dimension of keys.",
"In our model, we concatenate the textual and visual features into a sequence G , G = ( e 1 , ..., e n (cid:124) (cid:123)(cid:122) (cid:125) n , v 1 , 1 B 1 , . . . , v 5 , 5 B 1 (cid:124) (cid:123)(cid:122) (cid:125) 25=5 5 , v 1 , 1 B 2 , ..., v 5 , 5 B 2 (cid:124) (cid:123)(cid:122) (cid:125) 25=5 5 , v 1 , 1 B 3 , ..., v 5 , 5 B 3 (cid:124) (cid:123)(cid:122) (cid:125) 25=5 5 ) .",
"We feed G into the MCE, and therefore Q = K = V = G (cid:62) .",
"The outputs of the MCE are divided into two parts: the corresponding textual part e C is used for TSTI, and the corresponding multi-scale visual parts v C 1 , v C 2 , and v C 3 are used for VSTI.",
"The M2N converter is an inverse procedure of the B2M converter.",
"The dimensions of the output v N i of the M2N converter are the same as those of the input v i of the B2M converter, where i = { 1 , 2 , 3 } .",
"The M2N converter has three parts C 1 , C 2 , and C 3 , corresponding to B 1 , B 2 , and B 3 of the B2M converter, respectively.",
"The architecture of M2N is shown in Table 1. Each part consists of transposed convolution (ConvT) (Dumoulin and Visin, 2016) followed by ReLU and upsampling layer.",
"The ConvT is considered as the reverse operation of convolution.",
"If the ConvT's kernel size, padding size, and stride are the same as those carried out on the Conv layer, then the ConvT generates the same spatial dimension as that of the Conv's input.",
"Upsampling reverses the pooling operation by the nearest-neighbor interpolation algorithm.",
"We use the BIO (short for Beginning, Inside, and Outside) schema (Ramshaw and Marcus, 1995)",
"to label textual sarcasm targets.",
"The B-ST' tag indicates the beginning of an ST and the I-ST' tag indicates the inside of an ST. The O' tag indicates that a token does not belong to any ST. We employ a classical sequence tagging model, i.e., BiLSTM-CRF (Huang et al., 2015), to label the textual STs.",
"The bidirectional LSTM (BiLSTM) first processes each sentence token-by-token and produces forward and backward hidden vectors for each token.",
"Then the concatenation of the two hidden vectors is input to a Conditional Random Fields (CRFs) layer (Lafferty et al., 2001).",
"For a sequence of tags y = { y 1 , . . . , y n } , the probability of the label sequence y is defined as follows: p ( y | x ) = e s ( x,y ) (cid:80) y (cid:48) Y e s ( x,y (cid:48) ) , (3) where Y is all possible tag sequences for the sentence x and s ( x, y ) are feature functions modeling transitions and emissions.",
"The computation details can be found in Lample et al. (2016).",
"The objective of labeling ST is to minimize the negative log-likelihood over the training data D t = { ( x ( i ) , y ( i ) ) } Mi =1 : LTSTI = M (cid:88) i =1 log ( p ( y ( i ) | x ( i ) )) .",
"3.7 Visual Sarcasm Target Identification There are two kinds of object detectors, one-stage and two-stage.",
"One-stage object detector such as YOLO (Redmon et al., 2016) is faster and simpler.",
"In this paper, we adopt YOLOv4's Neck and Head networks (Bochkovskiy et al., 2020) to perform VSTI.",
"The multi-scale cross-modality features v N 1 , v N 2 , and v N 3 are connected to the Neck network, which consists of Spatial Pyramid Pooling (SPP) (He et al., 2015) and Path Aggregation Network (PANet) (Liu et al., 2018).",
"The Neck network is to increase the receptive field and preserve spatial information.",
"The Head network is used for predicting bounding boxes at 3 different scales.",
"The output tensor of the Head network of YOLOv4 is d s i d s i [3 (4 + 1 + C )] at each scale, predicting 3 boxes per grid cell where each box has 4 bounding offsets ( t x , t y , t w , t h ) , 1 objectness score, and C class scores.",
"The detailed computation of bounding offsets can be found in Redmon and Farhadi (2018).",
"Each grid cell predicts the object probability and C class probabilities.",
"In our model, since there is 1 sarcasm object class, we ablate C class scores.",
"As in YOLO, L b is based on the bounding box priors that are assigned to ground truth objects and computed by mean squared error (MSE).",
"L o is computed by binary cross-entropy (BCE) for classifying the bounding box priors as object or non-object.",
"Finally, combining the TSTI and VSTI tasks, the objective function for MSTI is as follows: LMSTI = LTSTI + LV STI .",
"In this paper, we build an MSTI dataset for pub-lic research 1 .",
"We label textual and visual STs on the dataset collected by Cai et al. (2019) for multimodal sarcasm detection.",
"Each sample is manually annotated by three persons based on their common sense.",
"The agreement between the annotators is measured using a percentage of overlapping choices between the annotators, i.e., above one word overlapping for text phrases and above 50% intersection-over-union (IoU) overlapping for image regions.",
"We ensure the quality of ground truth by keeping the consistency of all annotator's 1 https://github.com/wjq-learning/MSTI 8168 #Tweet #Textual ST #Visual ST Train 3,546 2,501 2,172 Dev 727 542 543 Test 742 573 524 Table 2: Statistics of the MSTI dataset.",
"opinions.",
"The samples with annotations that the three annotators agree on are put into the dataset otherwise removed, making the annotations valid.",
"Figure 3 shows three examples.",
"The statistics of the MSTI dataset are shown in Table 2. The MSTI dataset is split into 3,546/727/742 as Train/Dev/Test in experiments.",
"The number of textual and visual sarcasm targets are also listed.",
"Table 3 shows the proportion of multimodal sarcasm target types, i.e., STs appear both in the text and image, ST only appears in the text, and ST only appears in the image.",
"Table 4 shows the number and percentage of different sized visual STs.",
"We categorize the size of ST as small (area occupation<1.5%), medium (1.5%<area oc-cupation<10%), and large (area occupation>10%).",
"We use CSPDarkNet53 as backbone and BERT-Base ( d =768) or BERT-Large ( d =1024) as LM.",
"All images are shaped to a size of 608 608 .",
"In the VSTI, we use the default settings of YOLOv4, e.g., IoU threshold and object confidence threshold.",
"The weights of neural network are randomly initialized except the pretrained BERT, backbone and Neck networks.",
"We train the model using the Adam (Kingma and Ba, 2014) optimizer with default settings and the learning rate is set to 1e-4.",
"All pretrained models are finetuned with a learning rate of 1e-5.",
"The mini-batch size is set to 8 and dropout rate is 0.5.",
"We use two-layer BiLSTM with 768 hidden states and Transformer encoder with 12 heads.",
"2018) and F1 score (Molla and Joshi, 2019) as evaluation metrics for TSTI.",
"The EM accuracy is computed as the number of samples that strictly match the boundaries of gold annotations divided by the total number of samples.",
"The F1 score = 2/(1/P+1/R) is calculated from precision P = TP/(TP+FP) and recall R = TP/(TP+FN), where TP is correctly predicted target word, FP is incorrectly predicted target word, and FN is target word but not predicted.",
"Average Precision (AP) is widely used to evaluate object detection (Lin et al., 2014).",
"The COCO-style AP, AP 50 , and AP 75 are evaluated for VSTI.",
"The COCO-style AP averages AP at IoU=[0.5:0.05:0.95].",
"AP 50 corresponds to AP at IoU=0.5 and AP 75 corresponds to AP at IoU=0.75.",
"IoU measures the amount of overlap between two bounding boxes and here is used as a criterion that determines if a prediction matches ground truth.",
"We train the model on one machine with an NVIDIA RTX 3090 (GPU) and Intel Core i9 10900K (CPU).",
"Our model takes approximately 10 hours for 60 epochs in training.",
"Our model is compared with text baselines such as rule-based & statistical extractors, socio-linguistic features, and BERT, object detection baselines such as Mask R-CNN (He et al., 2020) and YOLOv4, and pretrained multimodal baselines such as VL-BERT (Su et al., 2019) and Unicoder-VL (Li et al., 2020).",
"Rule-based & Statistical Extractors.",
"Joshi et al. (2018) introduced STI and proposed a method based on rules and statistical classification extractors.",
"The sarcasm target was determined based on the results of two extractors.",
"The configuration of R2 and Hybrid AND' performs the best on the MSTI dataset.",
"We test the source code 2 as a baseline of TSTI.",
"Socio-linguistic Features.",
"Patro et al. (2019) presented a deep learning framework augmented with socio-linguistic features to detect textual ST. Sociolinguistic features include the distribution of location (LOC) and organization (ORG) named entities, the distribution of POS tags, and the distribution of LIWC and Empath (Fast et al., 2016) categories.",
"We test the source code 3 as a baseline of TSTI.",
"BERT.",
"We follow the sequence tagging task of Devlin et al. (2019) as a baseline to perform TSTI.",
"The BERT-based model followed by linear and softmax layers is tested to tag textual ST. Object Detection Models.",
"We treat VSTI as a single object detection problem and test two state-of-the-art models, i.e., Mask R-CNN 4 and YOLOv4 5 .",
"We train the models on the MSTI dataset with the default values of parameters in the repository.",
"Pretrained Multimodal Models.",
"Recently, pretrained multimodal models such as VL-BERT, Unicoder-VL, and UNITER (Chen et al., 2020), have been proposed.",
"These models use regions-of-interest (RoIs) produced by object detectors such as Faster R-CNN (Ren et al., 2016) as visual tokens.",
"The inputs of pretrained multimodal models are RoI features and token embeddings; In this paper, we design an MSTI baseline approach based on pretrained multimodal models as follows: Labeling of the textual STs is based on the outputs of token embeddings, the same as in our TSTI method; The VSTI is performed by a binary classification on the outputs of RoI features followed by linear+softmax layers, and it is trained by the RoIs, which are considered as visual STs when the IoU with gold ST is larger than an optimal value of 0.7, otherwise they are considered as non-STs.",
"We finetune the IoU threshold for non-maximum suppression (NMS) to ignore overlapping RoIs and find that IoU NMS = 0 .",
"2 is optimal.",
"Table 5 shows the performance of our model compared with text, object detection, and pretrained multimodal baselines on the Dev and Test sets.",
"The results show that the BERT-based sequence tagging models are better than the previous works of STI (Joshi et al., 2018; Patro et al., 2019).",
"Fusing visual clues, our model outperforms BERT-based textual models on average 5.3% in F1 score and 4.4% in EM accuracy.",
"The object detection baselines such as Mask R-CNN and YOLOv4 which are directly trained by sarcastic objects are better than the pretrained multimodal baselines with RoIs detected by a traditional object detector, obtaining an increase of approximately 2% in AP metrics.",
"We test state-of-the-art backbones such as ResNet151 and VGG19, in which scale dimensions of the last three blocks are 19 19 , 38 38 , and 76 76 , respectively, the same as in CSPDarkNet53.",
"Therefore, the B2M in Table 1 can be directly used for ResNet151 and VGG19, and the M2N works if the dimensions of the output feature maps of { C 1 , C 2 , C 3 } are set to {2048, 1024, 512} for ResNet151 and {512, 512, 256} for VGG19, respectively.",
"The results show that CSPDarkNet53 achieves the best performance.",
"In addition, Table 6 reports APs (namely, APS , APM , and APL ) by our best model based on small, medium, and large 8170 EM F1 AP AP 50 AP 75 Our model 37.2 47.9 32.6 51.9 34.6 w/o text -27.4 (-5.2) 44.6 (-7.3) 28.1 (-6.5) w/o image 33.1 (-4.1) 42.8 (-5.1) -w/ text and w/o TSTI loss -29.7 (-2.9) 49.1 (-2.8) 31.6 (-3.0) w/ image and w/o VSTI loss 34.6 (-2.6) 43.9 (-4.0) -Table 7: Ablation results of our model on the Test set.",
"We ablate text (w/o text) or image (w/o image) from our multimodal model.",
"Table 7 shows the results of our model (backbone=CSPDarkNet53, LM=BERT-Large).",
"The performance drops by 5.1% in F1 score and 4.1% in EM accuracy when ablating images, indicating that visual clues are very useful for STI.",
"In addition, we ablate TSTI training (w/ text and w/o TSTI loss) or VSTI training (w/ image and w/o VSTI loss).",
"We observe that by only adding text but not training the textual task, our model can greatly improve the VSTI performance, i.e., from 44.6% to 49.1% in AP 50 .",
"However, by only adding image but not training the image task, our model obtains a small increase of 1.5% EM accuracy and 1.1% F1 score for TSTI.",
"These results indicate that texts has more explicit sarcasm information than images and sarcastic message likely comes more from texts, which are consistent with common sense.",
"We visualize the attentions of the MCE in terms of image-to-text and text-to-image in order to illustrate the sarcasm information added from anther modality.",
"The input of the MCE is composed of textual and multi-scale visual embeddings.",
"We abbreviate G , previously defined in Eq.",
"(2), as G = ( e, v B ) where e = ( e 1 , ..., e n ) and v B = ( v 1 , 1 B 1 , . . . , v 5 , 5 B 1 , v 1 , 1 B 2 , ..., v 5 , 5 B 2 , v 1 , 1 B 3 , ..., v 5 , 5 B 3 ) .",
"The attention weight matrix of the h -th head can be divided to four submatrics and denoted as follows: A h = (cid:18) A h ( e, e ) A h ( e, v B ) A h ( v B , e ) A h ( v B , v B ) (cid:19) .",
"Thus, the scaled dot-product attention of Eq.",
"(1) also can be written as A h G (cid:62) .",
"We define the computation of image-to-text and text-to-image attentions as follows: Text-to-image Attentions.",
"The goal of text-to-image attention is to quantify the effect of text on each image block.",
"We compute the average",
"sum of A h ( v B , e ) across all words, H heads, and 3 scales, then obtain the text-to-image attention weights on the image block with the coordinate ( p, q ) as follows: w v p,q = 1 3 H 3 (cid:88) i =1 H (cid:88) h =1 n (cid:88) k =1 A h ( v p,qB i , e k ) .",
"Image-to-text Attentions.",
"The text-to-image attention aims to quantify the effect of image on each word.",
"We compute the average sum of A h ( e, v B ) across 25 image blocks with 3 scales and H heads, then obtain the image-to-text attention weights as follows: w e k = 1 3 H 3 (cid:88) i =1 H (cid:88) h =1 5 (cid:88) p =1 5 (cid:88) q =1 A h ( e k , v p,qB i ) , (9) where k = 1 , . . . , n .",
"Eq.",
"(9).",
"The attention maps show some meaningful cues discovered by the MCE regarding sarcasm.",
"The red color denotes the highest weights.",
"We scale up the text-to-image attention map to image size using interpolation.",
"As expected, the text-to-image attentions in the middle columns of Figure 4 focus on the regions that are highly relevant to sarcasm targets, such as door in",
"(a), chicken nuggets in",
"(b), and orange peel in",
"(c).",
"Surprisingly, the image-to-text attentions in the right columns of Figure 4 point out the key words to well understand sarcasm.",
"Using these red colored words, the tweet authors express their opinions:",
"(a) Can the door through'?",
"(b) Are the chicken nuggets healthy'?",
"(c) Is the train clean' or is train home pleasant'?",
"In this paper, we introduce a new task for identifying both textual and visual sarcasm targets.",
"This work provides a good attempt to detect sarcasm targets on images.",
"Our model integrates multiple components such as sequence labeling, multi-scale cross-modality learning, and object detection.",
"The experimental results not only illustrate that visual clues can improve the performance of TSTI by a large margin, approximately 5% in F1 score, but also prove that it is feasible to detect sarcasm targets in images, obtaining a good accuracy of 51.9% in AP 50 .",
"This work was supported in part by Zhejiang Provincial Natural Science Foundation of China under Grant No.",
"LGN22F020002, the National Natural Science Foundation of China (NSFC) under Grant No. 62072402, and Key Research and Development Program of Zhejiang Province under Grant No. 2022C03037."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"method",
"objective",
"objective",
"objective",
"result",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"Learning to capture text-table alignment is essential for tasks like text-to-SQL.",
"A model needs to correctly recognize natural language references to columns and values and to ground them in the given database schema.",
"In this paper, we present a novel weakly supervised Stru ctureG rounded pretraining framework (STRUG) for text-to-SQL that can effectively learn to capture text-table alignment based on a parallel text-table corpus.",
"We identify a set of novel pretraining tasks: column grounding, value grounding and column-value mapping, and leverage them to pretrain a text-table encoder.",
"Additionally, to evaluate different methods under more realistic text-table alignment settings, we create a new evaluation set Spider-Realistic based on Spider dev set with explicit mentions of column names removed, and adopt eight existing text-to-SQL datasets for cross-database evaluation.",
"STRUG brings significant improvement over BERTLARGE in all settings.",
"Compared with existing pretraining methods such as GRAPPA, STRUG achieves similar performance on Spider, and outperforms all baselines on more realistic sets.",
"All the code and data used in this work is public available at https://aka.ms/ strug .",
"Semantic parsing is the task of mapping a natural language (NL) utterance to a machine-understandable representation such as lambda calculus, abstract meaning representation, or a structured query language (e.g., SQL).",
"In this paper, we focus on the task of translating NL questions to executable SQL queries (text-to-SQL).",
"This is a fundamental task for building natural language interfaces for databases, which can enable non-expert users to effortlessly query databases (Androutsopoulos et al., 1995; Li and Jagadish, 2014a).",
"One of the key challenges in text-to-SQL is text-table alignment, that is, to correctly recognize natural language references to columns and values and to ground them in the given database schema .",
"Consider the example in the top half of Fig.",
"1. A model needs to first identify the column mentions total credits , department , and value mention History , and then ground them to the given schema.",
"This is challenging for three reasons.",
"First, the model needs to jointly understand the NL utterance and the database schema, as the user may refer to a column using various expressions which usually differ from the original column name.",
"Second, the model needs to be able to generalize to new database schemas and referential language that is not seen in training.",
"Finally, in the case that accessing cell values is not possible, the model still needs to identify potential value mentions and link them to the correct columns without exhaustively searching and matching over the database.",
"On the other hand, text-table alignment naturally exists in parallel text-table corpora, e.g., web tables with context (Lehmberg et al., 2016), table-to-text generation datasets (Parikh et al., 2020; Chen et al., 2020a), table-based question answering datasets (Pasupat and Liang, 2015; Chen et al., 2020b).",
"Such datasets can be collected from web pages, documents, etc., and requires much less human effort to create compared with text-to-SQL datasets.",
"The bottom half of Fig. 1 gives an example of such an alignment dataset.",
"There are three value mentions 11417 , Pune Junction and Nagpur Jnction , which can be grounded to the train number , departure station and arrival station columns respectively.",
"Such alignment information can be easily obtained by leveraging the table contents or using some human annotation.",
"In this work, we aim to incorporate the text-table alignment knowledge contained in a parallel corpus via pretraining and use it to help the downstream text-to-SQL task.",
"We present a novel weakly supervised structure-grounded pretraining framework (STRUG) for text-to-SQL.",
"We design a set of prediction tasks and optimize them leveraging a parallel corpus containing both NL sentences and tabular data to encourage the encoded representation to capture information required to support tasks that require table grounding.",
"More specifically, we identify three critical tasks for aligning text with table: column grounding, value grounding and column-value mapping (examples shown in Fig. 2).",
"We re-purpose an existing large-scale table-to-text generation dataset ToTTo (Parikh et al., 2020) for pretraining and gain labels for the three tasks via weak supervision.",
"We experiment under two settings, with or without human assistance: (1) human assisted setting , using ToTTo's revised descriptions and cell annotations; (2) automatic setting , using the raw sentences and inferring the cell correspondences via string matching with the table contents.",
"As pointed out by Suhr et al. (2020), existing text-to-SQL benchmarks like Spider (Yu et al., 2018b) render the text-table alignment challenge easier than expected by explicitly mentioning exact column names in the NL utterances.",
"Contrast this to more realistic settings where users may refer to the columns using a variety of expressions.",
"Suhr et al. (2020) propose a new cross-database setting that uses Spider for training and includes eight other single-domain text-to-SQL datasets for Figure 2: Overview of our model architecture and three pretraining objectives.",
"evaluation.",
"In addition to adopting their setting, we create a new evaluation set called Spider-Realistic from the original Spider dev set, by removing explicit mentions of column names from an utterance.",
"We pretrain STRUG using 120k text-table pairs from ToTTo.",
"Experiments show that our structure-grounded pretraining objectives are very efficient and usually converge with around 5 epochs in less than 4 hours.",
"This dramatically reduces the pretraining cost compared to previous pretraining methods (Herzig et al., 2020; Yin et al., 2020).",
"We adopt the same model architecture as BERT (De-vlin et al., 2019), with simple classification layers on top for pretraining.",
"For downstream tasks, STRUG can be used as a text-table encoder and easily integrated with any existing state-of-the-art model.",
"We conduct extensive experiments and show that: (1) Combined with state-of-the-art text-to-SQL model RAT-SQL (Wang et al., 2020), using STRUG as encoder significantly outperforms directly adopting pretrained BERTLARGE (RAT-SQL's default encoder) and performs on par with other text-table pretraining models like GRAPPA (Yu et al., 2020) on the widely used Spider benchmark.",
"(2) On more realistic evaluation settings, including Spider-Realistic and the Suhr et al. (2020) datasets, our method outperforms all baselines.",
"This demonstrates the superiority of our pretraining framework in solving the text-table alignment challenge, and its usefulness in practice.",
"(3) STRUG also helps reduce the need for large amount of costly supervised training data.",
"We experiment with the WikiSQL benchmark (Zhong et al., 2017) by limiting training data size, and show that our pretraining method can boost the model performance by a large margin and consistently outperforms existing pretraining methods.",
"Cross-Database Text-to-SQL.",
"Remarkable progress has been made in text-to-SQL over the past few years.",
"With sufficient in-domain training data, existing models already achieve over 80% exact matching accuracy (Finegan-Dollak et al., 2018; Wang et al., 2018) on single-domain benchmarks like ATIS (Hemphill et al., 1990; Dahl et al., 1994) and GeoQuery (Zelle and Mooney, 1996).",
"However, annotating NL questions with SQL queries is expensive making it cost-prohibitive to collect training examples for all possible databases.",
"A model that can generalize across domains and databases is desired.",
"In light of this, Yu et al. (2018b) present Spider, a cross-database text-to-SQL benchmark that trains and evaluates a system using different databases.",
"More recently, Suhr et al. (2020) provide a holistic analysis of the challenges introduced in cross-database text-to-SQL and propose to include single-domain datasets in evaluation.",
"Their study uncovers the limitations of current text-to-SQL models, and demonstrates the need for models that can better handle the generalization challenges.",
"Pretraining for Text-Table Data.",
"Inspired by the success of pretrained language models, some recent work has tried to apply similar pretraining objectives to text-table data.",
"TaBERT (Yin et al., 2020) and TAPAS (Herzig et al., 2020) jointly learn text-table representations by leveraging a large amount of web tables and their textual context.",
"They flatten the tables and use special embeddings to model the structure information.",
"A masked language model (MLM) objective is then used to predict the masked tokens in the text-table data.",
"MLM is good at modeling the contextualized semantic representations of a token, but is weak at capturing the alignment between a pair of sequences (e.g., text-table).",
"More recently, GRAPPA (Yu et al., 2020) explores a different direction for pretraining which shares some similarity with existing work on data augmentation for semantic parsing.",
"GRAPPA first constructs synthetic question-SQL pairs using templates (a synchronous context free grammar) induced from existing text-to-SQL datasets, a SQL semantic prediction objective is then used to learn compositional inductive bias from the synthetic data.",
"However, as the synthetic data is generated using templates, and the column names and values are directly filled in the questions, it has the same problem as existing text-to-SQL datasets that eases the text-table alignment challenge.",
"In con-strast, STRUG aims to directly learn the text-table alignment knowledge from parallel text-table corpora via structure-grounded pretraining objectives.",
"We also note that existing pretraining methods and STRUG can be complementary and combined together in the future.",
"Structure Grounding in Text-to-SQL.",
"Structure grounding has been proven to be crucial for text-to-SQL, where a model needs to correctly identify column and value mentions in an NL utterance and link them to the given database schema (Guo et al., 2019; Bogin et al., 2019; Wang et al., 2020; Lei et al., 2020).",
"Most existing text-to-SQL systems have specially designed components for structure grounding, which is also referred to as schema linking.",
"For example, Guo et al. (2019); Yu et al. (2018a) explore using simple heuristics like string matching for schema linking, and use the linking results as direct hints to their systems.",
"However, such heuristics may not generalize well in real world scenarios where there are varied ways to refer to a column, which usually differ from the original column name.",
"More recently, Shi et al. (2020) and Lei et al. (2020) take a step forward and manually annotate WikiTableQuestions (Pasupat and Liang, 2015) and Spider with fine-grained alignment labels for supervised training (together with the text-to-SQL objective), which brings significant improvements.",
"The main drawback of these models is that they are limited to learn the alignment knowledge from a relatively small training corpus, and cannot generalize well in a cross-domain setting.",
"Moreover, SQL annotations and fine-grained alignment labels are both expensive to get manually.",
"In contrast, this paper aims to re-purpose an existing parallel text-table corpus for pretraining models to learn structure grounding, where we generate alignment labels at large scale with low or no cost.",
"One of the critical generalization challenges in cross-database text-to-SQL is text-table alignment, i.e., a model needs to understand NL utterances and database schemas unseen in training, including value mentions and novel columns, and to correctly map between them.",
"Similar generalization challenges have been studied for a long time in the NLP field.",
"Recently, pretrained language models (Devlin et al., 2019; Liu et al., 2019; Lewis et al., Figure 3: Illustration of the parallel corpus ToTTo (Parikh et al., 2020) and our two weakly supervised pretraining settings.",
"2020) have achieved great success in tackling the challenges by learning contextualized representations of words from a large text corpus.",
"Inspired by this, in this work we aim to develop a pretraining method that can directly learn the text-table alignment knowledge from a large parallel text-table corpus.",
"Unlike previous text-table pretraining works (Herzig et al., 2020; Yin et al., 2020) that optimize unsupervised objectives like MLM during pretraining, we carefully design three structure-grounded tasks: column grounding, value grounding and column-value mapping.",
"These tasks are related to text-to-SQL and can directly capture the text-table alignment during pretraining.",
"As a result, the learned alignment knowledge can be effectively transferred to the downstream task and improve the final performance.",
"We use the same model architecture as BERT, and add simple classification layers on top for the three structure-grounded tasks.",
"For downstream tasks, our model can be easily integrated into existing models as text-table encoder.",
"Following previous work (Hwang et al., 2019; Wang et al., 2020; Guo et al., 2019), we linearize the input by concatenating the NL utterance and column headers, using <sep> token as a separator.",
"Formally, given a pair of NL utterance { x i } and table with a list of column headers (in case there are multiple tables like in databases, we concatenate all the column names together) { c j } , we first obtain the contextualized representation x i of each token in the utterance and c j for each column using the last layer output of the BERT encoder.",
"Here each column header c j may contain multiple tokens c j, 0 , . . . , c j, | c j | .",
"We obtain a single vector representation for each column using column pooling.",
"More specifically, we take the output of the first and last token of the header, and calculate the column representation as c j = ( c j, 0 + c j, | c j | ) / 2 .",
"{ x i } and { c j } are then used to compute losses for the three tasks.",
"An overview of our model architecture and pretraining objectives are shown in Fig.",
"2. Column grounding.",
"An important task in text-to-SQL is to identify grounded columns from the schema and use them for the generated SQL query.",
"With a parallel text-table corpus, this is similar to selecting the columns that are mentioned in the associated NL sentence.",
"This task requires a model to understand the semantic meaning of a column based on its header alone, and to infer its relation with the NL sentence based on the contextualized representations.",
"We formulate it as a binary classification task.",
"For each column c j , we use a one-layer feed forward network f ( ) to get prediction p cj = f ( c j ) of whether c j is mentioned in the sentence or not.",
"The column grounding loss L c is then calculated using the binary cross entropy loss w.r.t. ground truth labels y cj { 0 , 1 } .",
"Note this task requires the model to identify the meaning of a column without access to any of its values.",
"Hence, it is suitable for the typical text-to-SQL setting where the model only has access to the database schema.",
"Value grounding.",
"For clauses like WHERE and HAVING , to generate an executable SQL query, a model also needs to extract the value to be compared with the grounded column from the NL utterance.",
"This can be transformed to the task of finding cell mentions in the NL sentence with a parallel text-table corpus.",
"Since the contents of the table is Dataset # Examples Exec Acc (Suhr et al., 2020) % Col Mentioned ATIS (Hemphill et al., 1990; Dahl et al., 1994) 289 (486) 0.8 0.0 Restaurants (Tang and Mooney, 2000) 27 (378) 3.7 0.0 Academic(Li and Jagadish, 2014b) 180 (196) 8.2 11.4 Yelp(Yaghmazadeh et al., 2017) 54 (128) 19.8 8.0 Scholar(Iyer et al., 2017) 394 (599) 0.5 0.0 Advising(Finegan-Dollak et al., 2018) 309 (2858) 2.3 0.3 IMDB(Yaghmazadeh et al., 2017) 107 (131) 24.6 1.0 GeoQuery(Zelle and Mooney, 1996) 532 (598) 41.6 3.9 Spider (Yu et al., 2018b) 1034 69.0 39.2 Spider-Realistic 508 -1.8 Table 1: Statistic of the datasets used in this work.",
"not available, it is necessary for the model to infer the possible value mentions based on NL utterance and the table schema only.",
"Similarly to column grounding, we also view this as a classification task.",
"For each token x i , we get prediction of x i being part of a grounded value as p vi = f ( x i ) .",
"The value grounding loss L v is then calculated using the binary cross entropy loss w.r.t. ground truth labels y vi { 0 , 1 } .",
"Column-Value mapping.",
"As there may be multiple columns and values used in the SQL query, a text-to-SQL model also needs to correctly map the grounded columns and values.",
"This is used to further strengthen the model's ability to capture the correlation between the two input sequences by learning to align the columns and values.",
"We formulate this as a matching task between the tokens in the NL sentence and the columns.",
"For every grounded token x i (i.e., y vi = 1 ), we pair it with each column c j and calculate the probability of x i matching c j as p cvi,j = f ([ x i , c j ]) .",
"Here [ , ] is the vector concatenation operation.",
"We then apply a softmax layer over the predictions for each token p cvi = { p cvi,j } | c | j =1 , and the final column-value mapping loss L cv is then calculated as L cv = CrossEntropy (softmax ( p cvi ) , y cvi ) , where y cvi { 0 , 1 } | c | is the ground truth label.",
"The final loss L for pretraining is the sum of all three losses.",
"We experimented with different weights for each term, but did not observe significant improvement on the results.",
"Hence we only report results with equally weighted losses.",
"We obtain ground truth labels y c j , y v i and y cv i from a parallel text-table corpus based on a simple intuition: given a column in the table, if any of its cell values can be matched to a phrase in the sentence, this column is likely mentioned in the sentence, and the matched phrase is the value aligned with the column.",
"To ensure high quality text-table alignment information in the pretraining corpus, unlike previous work (Herzig et al., 2020; Yin et al., 2020) that use loosely connected web tables and their surrounding text, here we leverage an existing large-scale table-to-text generation dataset ToTTo (Parikh et al., 2020).",
"ToTTo contains 120,761 NL descriptions and corresponding web tables automatically collected from Wikipedia using heuristics.",
"Additionally, it provides cell level annotation that highlights cells mentioned in the description and revised version of the NL descriptions with irrelevant or ambiguous phrases removed.",
"We experiment with two pretraining settings, with or without human assistance.",
"In the human assisted setting , we use the cell annotations along with the revised description to infer the ground truth labels.",
"More specifically, we first label all the columns c j that contain at least one highlighted cell as positive ( y cj = 1 ).",
"We then iterate through all the values of the highlighted cells and match them with the NL description via exact string matching to extract value mentions.",
"If a phrase is matched to a highlighted cell, we select all the tokens x i in that phrase and align them with the corresponding 1 Unlike Suhr et al. (2020), here we do not consider examples where there is no column compared against entity.",
"columns c j ( y vi = 1 , y cvi,j = 1 ).",
"In the automatic setting , we use only the tables and the raw sentences, and obtain cell annotations by comparing each cell with the NL sentence using exact string matching.",
"Note that in both settings, the cell values are used only for preparing supervision for the pretraining objectives, not as inputs to the pretraining model.",
"To make the pretraining more effective and to achieve a better generalization performance, we also incorporate two data augmentation techniques.",
"First, since the original parallel corpus only contains one table for each training example, we randomly sample K neg tables as negative samples and append their column names to the input sequence.",
"This simulates a database with multiple tables and potentially hundreds of columns, which is common in text-to-SQL.",
"Second, we randomly replace the matched phrases in the NL sentences with values of cells from the same column (the labels are kept the same).",
"This way we can better leverage the contents of the table during pretraining and improve the model's generalization ability by exposing it to more cell values.",
"As one of the first datasets to study cross-database text-to-SQL, Spider has been a widely used benchmark in assessing a model's ability to generalize to unseen programs and databases.",
"However, as pointed out by Suhr et al. (2020), Spider eases the task by using utterances that closely match their paired SQL queries, for example by explicitly mentioning the column names in the question, while in practice NL references to columns usually differ from the original column name.",
"To alleviate this problem, Suhr et al. (2020) propose to train the model with cross-domain dataset like Spider, and add another eight single-domain datasets like ATIS (Hemphill et al., 1990; Dahl et al., 1994) and GeoQuery (Zelle and Mooney, 1996) for evaluation.",
"However, some of the datasets differ a lot from Spider, introducing many novel query structures and dataset conventions.",
"2 As we can see from Table 1, their model (Suhr et al., 2020) has very poor performance in some datasets.",
"In light of this, we present a new realistic and challenging evaluation set based on Spider.",
"We first select a complex subset from 2 Some of the datasets contain operators that are not covered by Spider grammar or novel query structure like self join that does not exist in the training corpus.",
"the Spider dev set where there are columns compared against values or used in clauses like ORDER BY .",
"We then manually modify the NL questions in the subset ourselves to remove or paraphrase explicit mentions of columns names, except for the columns in SELECT clauses, while keeping the SQL queries unchanged.",
"Some examples are shown in Table",
"2. This way we do not introduce extra challenges like adapting to new query structures but make it possible to fairly assess the model's capability in aligning text and tables.",
"To make a more comprehensive comparison, we will also report results on the original Suhr et al. (2020) datasets.",
"Spider and the realistic evaluation sets.",
"Spider (Yu et al., 2018b) is a complex cross-database text-to-SQL dataset.",
"It contains 10k complex question-query pairs grounded on 200 databases where multiple tables are joined via foreign keys.",
"In addition, we create a new realistic evaluation set Spider-Realistic as described in Section 4.",
"We also include the original Suhr et al. (2020) datasets, for a more comprehensive comparison.",
"For the base model, we use RAT-SQL (Wang et al., 2020) which is the state-of-the-art model according to the official leaderboard as of the submission time.",
"To generate executable SQL queries, we modify the pointer generator in RAT-SQL to enable it to copy values from the question.",
"We use the same trained model for evaluation on the Spider dev set and the realistic evaluation sets.",
"Yu et al. (2018b) includes some single-domain text-to-SQL datasets like GeoQuery as extra training data for Spider.",
"Following Suhr et al. (2020), we train the model with only the original Spider data, and discard additional training data used by some previous works like Yu et al. (2018b).",
"We use both the set match accuracy (exact match) Models Spider-Realistic ATIS GeoQuery Restaurants Academic IMDB Yelp Scholar Advising # Examples 508 289 532 27 180 107 54 394 309 S c h e m a O n l y Suhr et al. (2020) -0.8 (0.5) 41.6 (35.6) 3.7 (3.7) 8.2 (6.1) 24.6 (24.3) 19.8 (16.7) 0.5 (0.4) 2.3 (1.2) RAT-SQL w/o value linking w.",
"from the official Spider evaluation script and execution accuracy 3 for evaluation on Spider and Spider-Realistic.",
"On the Suhr et al. (2020) datasets, we use the official evaluation script 4 released by the authors and report execution accuracy.",
"WikiSQL.",
"WikiSQL (Zhong et al., 2017) is a large-scale text-to-SQL dataset consists of over 80k question-query pairs grounded on over 30k Wikipedia tables.",
"Although existing models are already reaching the upper-bound performance on this dataset (Hwang et al., 2019; Yavuz et al., 2018), mainly because of the simplicity of the SQL queries and large amount of data available for training, previous works have also used this dataset to demonstrate the model's generalization ability with limited training data (Yu et al., 2020; Yao et al., 2020).",
"For the base model, we use SQLova (Hwang et al., 2019) without execution-guided decoding.",
"Follow-3 We execute case insensitive SQL queries, and compare the returned table.",
"For all experiments, we use the BERT implementation from Huggingface (Wolf et al., 2020) and the pretrained BERTLARGE model from Google 5 .",
"For pretraining, we use Adam optimizer (Kingma and Ba, 2015) with a initial learning rate of 2e-5 and batch size of 48.",
"In both settings, we use K neg = 1 and pretrains our model for 5 epochs.",
"We use 4 V100 GPUs for pretraining, which takes less than 4 hours.",
"For Spider and the realistic evaluation sets, we use the official implementation of RAT-SQL 6 and modify it to generate executable SQL queries.",
"We follow the original settings and do hyperparam-eter search for learning rate (3e-4, 7.44e-4) and 5 We use the BERT-Large, Uncased (Whole Word Masking) model from https://storage.googleapis.",
"warmup step (5k, 10k).",
"We use the same polynomial learning rate scheduler with warmup and train for 40,000 steps with batch size of 24.",
"The learning rate for the pretrained encoder (e.g. BERT) is 3e-6 and is frozen during warmup.",
"For WikiSQL, we use the official SQLova implementation 7 .",
"We use the default setting with learning rate of 1e-3 for the main model and learning rate of 1e-5 for the pretrained encoder.",
"We train the model for up to 50 epochs and select the best model using the dev set.",
"5.3 Main Results Spider.",
"We first show results on Spider dev set in Table 4.",
"The original Spider setting assumes only the schema information about the target database is known in both training and evaluation phase, as the content of the database may not be accessible to the system due to privacy concern.",
"More recently, some works have tried to using the database content to help understand the columns and link with the NL utterance.",
"Here we show results for both settings.",
"In the first setting where only schema information is known, we disable the value-based linking module in RAT-SQL.",
"As we can see from Table 4, replacing BERTLARGE with STRUG consistently improves the model performance in both settings.",
"Under the setting where content is available, using STRUG achieves similar performance as GRAPPA and outperforms all other models.",
"GRAPPA uses both synthetic data and larger text-table corpus for pretraining.",
"However, it mainly learns inductive bias from the synthetic data while our model focuses on learning text-table association knowledge from the parallel text-table data.",
"In error analysis on the Spider dev set, we notice that our best model 8 corrects 76 out of 270 wrong predictions made by GRAPPA while GRAPPA corrects 80 out of 274 wrong predictions made by our model.",
"This demonstrates that the two pretraining techniques are complementary and we expect combining them can lead to further performance improvement.",
"For results on different difficulty levels and components, please see Appendix B.1.",
"More realistic evaluation sets.",
"Results on the realistic evaluation sets are summarized in Table 3.",
"Firstly, we notice the performance of all models drops significantly on Spider-Realistic, demonstrating that inferring columns without explicit hint is 7 https://github.com/naver/sqlova 8 RAT-SQL w.",
"a challenging task and there is much room for improvement.",
"Secondly, using STRUG brings consistent improvement over BERTLARGE in all realistic evaluation sets.",
"In the Spider-Realistic set, using STRUG also outperforms GRAPPA 9 by 2.9%.",
"Under the original Suhr et al. (2020) setting, combining RAT-SQL with STRUG significantly outperforms Suhr et al. (2020) in all datasets, despite that we do not include WikiSQL as additional training data as they did.",
"Thirdly, comparing results in Table 4 with Table 3, using STRUG brings larger improvement over BERTLARGE in the more realistic evaluation sets.",
"As shown in Table 1, the original Spider dataset has a high column mention ratio, so the models can use exact match for column grounding without really understanding the utterance and database schema.",
"The more realistic evaluation sets better simulate the real world scenario and contain much less such explicit clues, making the text-table alignment knowledge learned by STRUG more valuable.",
"For case studies on Spider-Realistic, please check Section 5.4.",
"WikiSQL.",
"Results on WikiSQL are summarized in Table",
"5. When using the full training corpus, we notice that using STRUG achieves similar performance as BERTLARGE .",
"This is probably because of 9 We use the checkpoint provided by the author, which achieves 73.8% exact match accuracy on the Spider dev set.",
"Here we only evaluate on Spider-Realistic with exact match accuracy because their model does not generate values and includes IMDB and Geo as extra training data.",
"the large size of training data and the simple SQL structure of WikiSQL.",
"To better demonstrate that the knowledge learned in pretraining can be effectively transferred to text-to-SQL task and reduce the need for supervised training data, we also conduct experiments with randomly sampled training examples.",
"From Fig. 4 we can see that with only 1% of training data (around 500 examples), models using STRUG can achieve over 0.70 accuracy, outperforming both BERTLARGE and TaBERT by a large margin.",
"STRUG brings consist improvement over BERTLARGE until we use half of the training data, where all models reach nearly the same performance as using the full training data.",
"We also show the training progress using 5% of training data in Fig.",
"5. We can see that STRUG also helps speed up the training progress.",
"For more break-down results on several subtasks, please see Appendix B.2.",
"Comparison of human assisted and automatic setting.",
"In all benchmarks, we notice that STRUG pretrained using the automatic setting actually performs similarly as the setting where cell annotations are used.",
"This indicates the effectiveness of our heuristic for cell annotation and the potential to pretrain STRUG with more unannotated parallel text-table data.",
"We compare the predictions made by RAT-SQL w.",
"BERTLARGE and w.",
"STRUG (Automatic).",
"Some examples are shown in Table",
"6. In the first example from Spider-Realistic, we can see that the model w.",
"BERTLARGE fails to align tournaments with the tourney_name column, because of string mismatch.",
"In the second example from IMDB, although the model correctly recognizes James Bond as value reference, it fails to ground it to the correct column which is movie_title .",
"This supports our hypothesis that using STRUG helps to improve the structure grounding ability of the model.",
"In this paper, we propose a novel and effective structure-grounded pretraining technique for text-to-SQL.",
"Our approach to pretraining leverages a set of novel prediction tasks using a parallel text-table corpus to help solve the text-table alignment challenge in text-to-SQL.",
"We design two settings to obtain pretraining labels without requiring complex SQL query annotation: using human labeled cell association, or leveraging the table contents.",
"In both settings, STRUG significantly outperforms BERTLARGE in all the evaluation sets.",
"Meanwhile, although STRUG is surprisingly effective (using only 120k text-table pairs for pretraining) and performs on par with models like TaBERT (using 26m tables and their English contexts) and GRAPPA (using 475k synthetic examples and 391.5k examples from existing text-table datasets) on Spider, we believe it is complementary with these existing text-table pretraining methods.",
"In the future, we plan to further increase the size of the pretraining corpus, and explore how to incorporate MLM and synthetic data.",
"Dataset.",
"In this work, we re-purpose an existing table-to-text generation dataset ToTTo (Parikh et al., 2020) for our pretraining.",
"We obtain labels for our three pretraining tasks via weak supervision, which uses only the raw sentence-table pairs, or the cell annotations and revised descriptions that are already included in ToTTo dataset.",
"As a result, no extra human effort is required for collecting our pretraining corpus.",
"We also curate a more realistic evaluation dataset for text-to-SQL based on Spider dev set.",
"In particular, we first select a complex subset from the Spider dev set and manually revise the NL questions to remove the explicit mention of column names.",
"The detailed description of the process can be found in Section 4.",
"The first author manually revised all the questions himself, which results in 508 examples in total.",
"Application.",
"We focus on the task of text-to-SQL, which is a fundamental task for building natural language interfaces for databases.",
"Such interface can enable non-expert users to effortlessly query databases.",
"In particular, here we focus on improving the structure grounding ability of text-to-SQL models, which is critical in real-world use cases.",
"We evaluate our model with the widely used Spider benchmark and several more realistic datasets.",
"Experimental results show that our method brings significant improvement over existing baselines, especially on more realistic settings.",
"Computing cost.",
"We use 4 V100 GPUs for pretraining, and 1 V100 GPU for finetuning the model for text-to-SQL on Spider and WikiSQL.",
"One advantage of our method is its efficiency.",
"In our experiments, we pretrain the model for only 5 epochs, which can finish within 4 hours.",
"For comparison, the largest TaBERT model (Yin et al., 2020) takes 6 days to train for 10 epochs on 128 Tesla V100 GPUs using mixed precision training.",
"We thank Bo Pang, Tao Yu for their help with the official Spider evaluation.",
"We also thank anonymous reviewers for their constructive feedback."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"method",
"abstain",
"result",
"result",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"We introduce the largest transcribed Arabic speech corpus, QASR 1 , collected from the broadcast domain.",
"This multi-dialect speech dataset contains 2 , 000 hours of speech sampled at 16 kHz crawled from Aljazeera news channel.",
"The dataset is released with lightly supervised transcriptions, aligned with the audio segments.",
"Unlike previous datasets, QASR contains linguistically motivated segmentation, punctuation, speaker information among others.",
"QASR is suitable for training and evaluating speech recognition systems, acousticsand/or linguisticsbased Arabic dialect identification, punctuation restoration, speaker identification, speaker linking, and potentially other NLP modules for spoken data.",
"In addition to QASR transcription, we release a dataset of 130 M words to aid in designing and training a better language model.",
"We show that end-to-end automatic speech recognition trained on QASR reports a competitive word error rate compared to the previous MGB2 corpus.",
"We report baseline results for downstream natural language processing tasks such as named entity recognition using speech transcript.",
"We also report the first baseline for Arabic punctuation restoration.",
"We make the corpus available for the research community.",
"Research on Automatic Speech Recognition (ASR) has attracted a lot of attention in recent years (Chiu et al., 2018; Watanabe et al., 2018).",
"Such success has brought remarkable improvements in reaching human-level performance (Xiong et al., 2016; Saon et al., 2017; Hussein et al., 2021).",
"This has been achieved by the development of large spoken corpora: supervised (Panayotov et al., 2015; Ardila et al., 2019); semi-supervised (Bell et al., 2015; Ali 1 QASR (cid:81)(cid:229)(cid:148)(cid:16)(cid:175) in Arabic means Palace.",
"et al., 2016); and more recently unsupervised (Valk and Alumae, 2020; Wang et al., 2021) transcription.",
"This work enables to either reduce Word Error Rate (WER) considerably or extract metadata from speech: dialect-identification (Shon et al., 2020); speaker-identification (Shon et al., 2019); and code-switching (Chowdhury et al., 2020b, 2021).",
"Natural Language Processing (NLP), on the other hand values large amount of textual information for designing experiments.",
"NLP research for Arabic has achieved a milestone in the last few years in morphological disambiguation, Named Entity Recognition (NER) and diacritization (Pasha et al., 2014; Abdelali et al., 2016; Mubarak et al., 2019).",
"The NLP stack for Modern Standard Arabic (MSA) has reached very high performance in many tasks.",
"With the rise of Dialectal Arabic (DA) content online, more resources and models have been built to study DA textual dialect identification (Abdul-Mageed et al., 2020; Samih et al., 2017).",
"Our objective is to release the first Arabic speech and NLP corpus to study spoken MSA and DA.",
"This is to enable empirical evaluation of learning more than the word sequence from the speech.",
"In our view, existing speech and NLP corpora are missing the link between the two different modalities.",
"Speech poses unique challenges such as disfluency (Pravin and Palanivelan, 2021), overlap speech (Tripathi et al., 2020; Chowdhury et al., 2019), hesitation (Wottawa et al., 2020; Chowdhury et al., 2017), and code-switching (Du et al., 2021; Chowdhury et al., 2021).",
"These challenges are often overlooked when it comes to NLP tasks, since they are not present in typical text data.",
"In this paper, we create and release 2 the largest corpus for transcribed Arabic speech.",
"It comprises of 2 , 000 hours of speech data with lightly supervised transcriptions.",
"Our contributions are: ( i ) 2 Data can be obtained from: https://arabicspeech.org/qasr aligning the transcription with the corresponding audio segments including punctuation for building ASR systems; ( ii ) providing semi-supervised speaker identification and speaker linking per audio segments; ( iii ) releasing baseline results for acoustic and linguistic Arabic dialect identification and punctuation restoration; ( iv ) adding a new layer of annotation in the publicly available MGB2 testset, for evaluating NER for speech transcription; ( v ) sharing code-switching data between Arabic and foreign languages for speech and text; and finally, ( vi ) releasing more than 130 M words for Language Model (LM).",
"We believe that providing the research community with access to multi-dialectal speech data along with the corresponding NLP features will fos-ter open research in several areas, such as the analysis of speech and NLP processing jointly.",
"Here, we build models and share the baseline results for all of the aforementioned tasks.",
"The CallHome task within the NIST benchmark evaluations framework (Pallett, 2003), released one of the first transcribed Arabic dialect dataset.",
"Over years, NIST evaluations provided with more dialectal mainly in Egyptian and Levantine dialects, as part of language recognition evaluation campaign.",
"Projects such as GALE and TRANSTAC (Olive et al., 2011) program, released more than 251 hours of Arabic data, including the first spoken Iraqi dialect among others.",
"These datasets exposed the research community to the challenges of spoken dialectal Arabic and motivated to design competition to handle dialect identification, dialectal ASR among others (see Ali et al. (2021) for details).",
"The following datasets are released from the Multi-Genre Broadcast MGB challenge:",
"(i) MGB-2 (Ali et al., 2016) this dataset is the first milestone towards designing the first large scale continuous speech recognition for Arabic language.",
"The corpus contains a total of 1 , 200 hours of speech with lightly supervised transcriptions and is collected from Aljazeera Arabic news channel span over many years.",
"(ii) MGB3 (Ali et al., 2017) focused on only Egyptian Arabic broadcast data comprises of 16 hours.",
"(iii) MGB5 (Ali et al., 2019) consists of 13 hours of Moroccan Arabic speech data.",
"In addition, the CommonVoice 3 Ara-3 https://commonvoice.mozilla.org/en/ datasets Table 1: Comparison between MGB2 vs QASR.",
"bic dataset, from the CommonVoice project, provides 49 hours of modern standard Arabic (MSA)",
"4 Unlike MGB2 , QASR dataset is the largest multi-dialectal corpus with linguistically motivated segmentation.",
"speech data.",
"The dataset includes multi-layer information that aids both speech and NLP research community.",
"QASR is the first speech corpora to provide resources for benchmarking NER, punctuation restoration systems.",
"For close comparison between MGB2 vs QASR, see Table",
"1. 2 Corpus Creation 2.1 Data Collection We obtained Aljazeera Arabic news channel's archive (henceforth AJ), spanning over 11 years from 2004 until 2015.",
"It contains more than 4 , 000 episodes from 19 different programs.",
"These programs cover different domains like politics, society, economy, sports, science, etc.",
"For each episode, we have the following: ( i ) audio sampled at 16 KHz; ( ii ) manual transcription, the textual transcriptions contained no timing information.",
"The quality of the transcription varied significantly; the most challenging were conversational programs in which overlapping speech and dialectal usage was more frequent; and finally ( iii ) some metadata.",
"For better evaluation of the QASR corpus, we reused the publicly available MGB-2 (Ali et al., 2016) testset as it has been manually revised, coming from the same channel, thus making this testset ideal to evaluate the QASR corpus.",
"It is worth noting that we ensure that the MGB-2 dev/test sets 4 Reported on June 2021.",
"are not included in QASR corpus, so they can be used to report progress on the Arabic ASR challenge.",
"We have also enriched the MGB2 testset with manually annotated speaker information like country 5 , gender of the speakers, along with NER information and used it to evaluate our baselines.",
"Moreover, we apply topic classification and dialect identification.",
"Our models achieved an overall accuracy of 96 % and 88 % respectively, which have been measured on internal testsets also created from Aljazeera news articles.",
"More details can be found in ASAD demo paper (Hassan et al., 2021).",
"Table 2 gives a rough estimate about distributions in the updated MGB2 testset.",
"Most of the recorded programs have the following metadata: program name, episode title and date, speaker names and topics of the episode.",
"Majority of metadata information appear in the beginning of the file.",
"However, some of them are embedded inside the episode transcription.",
"Figure 1 shows a sample input file from Aljazeera.",
"One of the main challenges is the inconsistency in speaker names, e.g. Barack Obama appeared in 9 different forms ( Barack Obama , Barack Obama/the US President , 5 We use ISO 3166 for country codes.",
"Barack Obama/President of USA , Barck Obama (typo),",
"etc.).",
"The list of guest speakers and episode topics are not comprehensive, with many spelling mistakes in the majority of metadata field names and attributes.",
"To overcome these challenges, we applied several iterations of automatic parsing and extraction followed by manual verification and standardization.",
"Sample output file from QASR is shown in Figure",
"2. It contains speaker names as they appear in the current episode and their corresponding standardized forms across all files, which can be useful for tasks such as speaker identification and speaker linking across the entire corpus.",
"For each speaker, we provide gender information and whether the speaker's name refers to a unique person (e.g. Barack Obama ) or not (e.g. One of the protesters , or an audio reporter ).",
"Figure 2 has information on the anchor speaker and two guests as they appear in the metadata file, in addition to other speakers that were missed in the original transcription.",
"It is worth noting that we provide gender and country for common Arabic speakers (who have at least 20 segments in the entire corpus).",
"On the other hand, we ignore metadata for foreign speakers because dubbing their speeches can be done by any voice-over.",
"We provide gender information for 2 , 000 speakers and this covers 82 % of all segments in the whole corpus.",
"Speech and text are aligned (see details in Section 2.3) and split into short segments (see Section Figure 2: Sample output text file from QASR (XML).",
"2.5).",
"For each segment, we provide: words ( element ), timing information ( starttime and endtime ) in addition to speaker ID ( who ), Average Word Duration ( AWD ) in seconds, Grapheme Match Error Rate ( GWER ), and Word Match Error Rate ( WMER ).",
"For details about word and grapheme match, refer to (Bell et al., 2015; Ali et al., 2016).",
"Figure 2 shows information for Segment 1 that appears in Figure",
"1. 2.3 Speech to Text Alignment The main concept of this method is to run an Arabic speech recognition system over the entire episode (Khurana and Ali) and use the recognized word sequences and their locations in time for automatic alignment (Braunschweiler et al., 2010).",
"For alignment, Aljazeera and ASR transcriptions are then converted into two long sequences of words.",
"Aligning the sequences was challenging for many reasons; code-switching between MSA and dialects; human transcription was not verbatim, e.g. some spoken words were dropped due to repetition or correction; spelling and grammar mistakes; usage of foreign languages mainly English and French; and many overlapped speeches.",
"We used SmithWaterman algorithm (Smith et al., 1981), which performs local sequence alignment to determine similar regions between two strings.",
"We modified the algorithm to accept an approximate match between the given transcription and the recognized word sequence.",
"If the Lev-enshtein distance between two words half the length (number of characters) in the given transcription, this is considered as an approximate match.",
"Figure 3 shows a sample alignment, where each word is assigned to a speaker after parsing Aljazeera text and aligned, if possible, to a word from ASR transcription along with its timing informa-Figure 3: Alignment of Aljazeera transcription & ASR tion.",
"Relaxation is applied in case of approximate match.",
"Time information of the missing words (highlighted in red in AJ column) is estimated by interpolation from the matched word before and after.",
"In this example, we consider words (cid:233)(cid:74)(cid:46)(cid:28)(cid:46)(cid:130)(cid:29)(cid:46) (cid:44) (cid:73)(cid:46)(cid:28)(cid:46)(cid:130)(cid:29)(cid:46) (because-of, because-of-it) as approximate match.",
"Figure 4 shows the matching accuracy between the ASR and the given transcription at the segment level.",
"We applied two levels of matching to deal with these challenges: exact match (where both transcription and ASR output are identical), and approximate match (where there is a forgiving edit distance between words in the transcription and ASR output).",
"Exact match ( 100 % in the x-axis ) would have led to less than 27 % of the segments, while approximate match allows to consider more segments.",
"Unlike MGB-2, we considered many factors that we believe lead to better and logical segmentation, namely: Surface: We tried to make segments in the range of [ 3 10 ] words.",
"We consider punctuation 6 as end of segments if they appear in this range, and we increase the window to 5 words to capture any of them in the neighbouring words.",
"Typically, transcribers insert punctuation marks to indicate end of logical segments (sentences or phrases).",
"Dialog: When a speaker changes in the transcribed text, we consider this as a valid end of segment.",
"By doing this, we assign only one speaker to each segment.",
"Acoustics: If there is a silence duration of at least 150 msec between words, we consider this as a signal to potentially end the current segment.",
"We consider the proceeding linguistic rules to confirm the validity of this end.",
"Linguistics: For linguistically motivated segmentation, we want to avoid ending segments in wrong places (e.g. in the middle of Named Entities (NE), Noun Phrases (NP) or Adjective Phrases (AP)).",
"To do so, from the 130 M words in the LM data, we extracted the most frequent 10 K words that were not followed by any punctuation in 90 % of the cases, then we revised them manually 7 .",
"We call this list NO-STOP-LIST.",
"Examples are: (cid:232)(cid:65)(cid:109)(cid:46)(cid:26)(cid:16)(cid:39)(cid:65)(cid:75)(cid:46) (cid:44) (cid:248)(cid:10)(cid:88) (cid:13)(cid:241)(cid:75)(cid:10) (cid:44) (cid:250)(cid:10)(cid:9)(cid:175) (in, leads-to, towards).",
"Additionally, we used the publicly available Arabic NLP tools (Farasa) 8 for NER and Part Of Speech (POS) tagging to label each word in the transcriptions.",
"We put marks to avoid ending segments in the middle of NEs, NPs or APs.",
"These are some examples from MGB-2 that have segmentation errors and words appearing erroneously in different segments: (cid:128)(cid:65)(cid:9)(cid:74)(cid:203)(cid:64)(cid:47) (cid:200)(cid:65)(cid:211)(cid:14)(cid:64) (People's (seg i ) /hopes (seg i+1 )), (cid:16)(cid:233)(cid:74)(cid:10)(cid:107)(cid:46)(cid:80)(cid:65) (cid:9)(cid:103)(cid:47) (cid:250)(cid:10)(cid:171)(cid:65)(cid:130)(cid:211) (external /endeavors) and (cid:16)(cid:233)(cid:74)(cid:10)(cid:186)(cid:75)(cid:10)(cid:81)(cid:211)(cid:13)(cid:66)(cid:64)(cid:47) (cid:16)(cid:232)(cid:89)(cid:106)(cid:16) (cid:74)(cid:214) (cid:207) (cid:64) (cid:16) (cid:72)(cid:65)(cid:75) (cid:10)(cid:66)(cid:241)(cid:203)(cid:64) (United States /of America).",
"If the surface or acoustics modules suggest end of segment, while contradicting these linguistics rules, this suggestion is ignored.",
"Details of QASR corpus after alignment and segmentation are presented in Table",
"3. 2.6 Intrasentential Code-Switching We discuss here the presence of intrasentential code-switching in QASR.",
"We noticed in addition 6 Common punctuation marks are: Period, Comma, Question mark, Exclamation mark, Semicolon, Colon and Ellipsis.",
"to the intrasentential dialectal code switching (dis-cussed in Section 3.4), the dataset also includes 6 K segments, where alternation between Arabic and English/French languages are seen.",
"To quantify the amount of code-switching present in this data, we calculate both the utterance and corpus level Code-Mixing Index (CMI), motivated by Chowdhury et al. (2020b); Gamback and Das (2016).",
"Based on the range of utterance-level CMI values, we group our dataset, as shown in Table",
"4. As for the corpus-level CMI, we observe an average of 30 .",
"5 CMI-value, calculated based on the average of utterance-level 9 CMI considering the code-switching segments in QASR dataset.",
"Furthermore, from utterance-level analysis, we notice that the majority of the code-switched segments falls under 15 < CMI 30% with an average of 2 alteration points per segment (e.g. Ar En Ar).",
"Even though the code-switching occurs in only 0 .",
"4% of the full dataset, we notice that we have very short 968 segments (ranging CMI value > 30%) with frequent alternating language code, such as: (cid:248)(cid:10)(cid:89)(cid:9)(cid:74)(cid:171) duplex (cid:64)(cid:241)(cid:107)(cid:46) (cid:16)(cid:233)(cid:9)(cid:74)(cid:28)(cid:10)(cid:9)(cid:74)(cid:109)(cid:46)(cid:26)(cid:39)(cid:46) Building.",
"In the future, these segments could be used to further explore the effect of such code-switching in the performance of speech and NLP models jointly.",
"In this section, we study QASR dataset for the ASR task.",
"We adopt the End-to-End Transformer (E2E-T) architecture from Hussein et al. (2021) as our baseline for QASR dataset.",
"We first augment the speech data with the speed perturbation with speed factors of 0 .",
"9 , 1 .",
"0 and 1 .",
"1 (Ko et al., 2015).",
"Then, we extract 83 -dimensional feature frames consisting of 80 -dimensional log Mel-spectrogram and pitch features (Ghahremani et al., 2014) and apply 9 Excluding switches between the utterances.",
"cepstral mean and variance normalization.",
"Furthermore, we augment these features using the specaug-ment approach (Park et al., 2019).",
"We use Espnet (Watanabe et al., 2018) to train the E2E-T model on MGB-2 and QASR datasets.",
"Each model was trained for 30 epochs using 4 NVIDIA Tesla V 100 GPUs, each with 16 GB memory, which lasted two weeks.",
"Results of the baseline model on both development and testsets are shown in Table",
"5. It can be seen that the best E2E-T-MGB2 achieves slightly better WER with a difference of 0 .",
"3 % on average.",
"This is expected since adopted E2E-T architecture was carefully tuned on MGB2 dataset.",
"However, the E2E-T-QASR achieves lower substitution and insertion rates with an absolute difference of 2 .",
"7 % and 0 .",
"5 % on average respectively.",
"It can also be noticed that almost half of the E2E-T-QASR errors are due to deletions.",
"To investigate these results further, we visualize the distribution of segmentation duration of the MGB-2 train, the QASR train and the testsets as shown in Figure",
"5. We consider the range within 3 standard deviations of each distribution as the effective segmentation duration that contains 99 % of the segments, and the rest 1 % of the segments are considered as outliers.",
"From Figure 5, it can be seen that QASR distribution is following the bell curve similar to the testset which was segmented by an expert transcriber.",
"On the other hand, the MGB2 distribution is right-skewed with segment duration outliers that go beyond 50 seconds.",
"In addition, one can observe that the effective segmentation duration of the testset is 9 seconds, which is larger than QASR effective segmentation duration, which is only 7 seconds.",
"On the other hand, the MGB2 effective segmentation duration covers a much larger range of over 30 seconds.",
"The difference in the segment duration affects the statistical properties of the data and causes a shift in the data distribution.",
"We think that this is the main reason why the baseline E2E-T-QASR achieves worse results than best E2E-T-MGB2 .",
"To validate our assumption, we analyze the E2E-T-QASR transcription and found that the deletion errors mainly appeared with segments that are larger than 7 seconds.",
"We illustrate our find-ings with two transcription examples in Buckwalter (BW) format shown in Figure 6: short segment of 6 seconds, and long segment of 10 seconds.",
"Deletions are highlighted in red, substitutions in yellow, and correct in green.",
"It can be seen from the short example that E2E-T-QASR achieves better results with a potential for code-switching.",
"On the other hand, the long example confirms our assumption about the shift in segments duration distribution between QASR and the testset.",
"3.2 Automatic Punctuation Restoration Cl.",
"In this section, we explore QASR for the automatic punctuation restoration task.",
"To prepare the training data, we first segment the utterances from the same speaker with a maximum window of 120 tokens.",
"We then remove utterances with 6 words and no punctuation in the segment.",
"We pre-process the lexical utterances, removing diacritics, brackets, among others.",
"For the task, we only keep the top 3 punctuation classes ( (cid:44) ', (cid:63) ' and .') and rest are mapped to class O' representing no punctuation .",
"The distribution of punctuation in QASR are highly imbalanced (as shown in Table 6), which is expected of a spoken corpus.",
"However, in comparison to the Fisher corpus (Cieri et al., 2004) and other language datasets (see (Li and Lin, 2020)), the distribution is more skewed.",
"This is because in Arabic, punctuation marks are rarely used, e.g., Segment1 in Figure 1, can be logically divided into two segments separated by a full stop.",
"We adapt a simple transformer-biLSTM architecture (Alam et al., 2020) as our baseline model using lexical information.",
"Given an input token sequence ( x 1 , x 2 ..., x m ) , we extract the subwords ( s 1 , s 2 ..., s n ) using wordpiece tokenizer.",
"These subwords are fed into the pre-trained BERT model, which outputs a vector of d dimension for each time step.",
"These d vectors are then passed to a BiLSTM",
"layer, consisting of h hidden units.",
"The choice of using BiLSTM is to make effective use of both past ( h ) and future ( h ) contexts for prediction.",
"The concatenated h + h output at each time step is then fed to a fully-connected layer with four output neurons, which correspond to 3 punctuation marks and the O' token.",
"input subword sequence.",
"10 For this task, we used AraBERT (Antoun et al., 2020): pre-trained on newspaper articles, containing 3 transformer self attention layers with each hidden layer of 768 .",
"These token embeddings are then passed onto a BiLSTM with hidden dimension of 768 .",
"The baseline model is trained using Adam optimizer with a learning rate of 1 e 5 and 32 batch size for 10 epochs.",
"Despite the fact that Arabic has a skewed distribution in punctuation, the baseline results reported in Table 7 for the 3 punctuation and O' labels show that the prediction results of the full stop and the question mark are better than the comma.",
"This again reconfirms that in Arabic, the use of comma is highly debatable (Mubarak et al., 2015; Mubarak and Darwish, 2014) and can easily be substituted by the full stop or other punctuation.",
"In the future, we will explore better architectures with information from different modalities, such as acoustics.",
"10 The maximum length of the subwords is set to 256 .",
"In cases, if the sequence exceeds the maximum length, it is then divided into two separate sequences.",
"One of the biggest challenges in broadcast domain is its speech diversity.",
"The anchor speaker voice is often clear and planned.",
"However, the spoken style 11 of different program guests can present various challenges.",
"Here, we showcase how QASR could be used to evaluate existing speaker models based on the speakers' role in each episode.",
"In the future, the dataset can also be used to study turn-taking and speaker dynamics, given the interaction between speakers in QASR.",
"We adapt one of the widely-known architectures used to model an end-to-end text-independent Speaker Recognition (SR) system.",
"For the study, we use a pre-trained model, with four temporal convolution neural networks followed by a global (sta-tistical) pooling layer and then two fully connected layers.",
"The input to the model is MFCCs features (with 40 coefficient) computed with a 25 msec window and 10 ms frame-rate from the 16 KHz audio.",
"The model is trained on Voxceleb1 (Nagrani et al., 2017) development set (containing 1 , 211 speakers and 147 K utterances).",
"More details can be found in Shon et al. (2018); Chowdhury et al. (2020a).",
"For speaker verification, we use verified same/different-speaker pairs of speech segments as input.",
"We extract the length normalized embed-11 The style can vary based on language fluency, speech rate, use of different dialects among other factors.",
"dings from the last layer of the SR model and then computed the cosine similarity between pairs.",
"For our evaluation, we constructed these verification pair trials by randomly picking up 40 K utterance pairs from: ( i ) speakers of the same gender; ( ii ) similar utterance lengths; and ( iii ) a balanced distribution between positive and negative targets 12 .",
"For this, we use the most frequent 20 anchor and 20 guest speakers data subset described in Table 8.",
"We then compare the Equal Error Rate (EER) of the model, reported in Table 9, using the designed verification pairs based on a particular job role, or their combination.",
"In addition, we also report the results on VoxCeleb1 official verification testset as a reference.",
"From the results, we observe that the SR model effectively distinguishes between the positive and negative pairs with 70 % (A) 72% (G) accuracy.",
"Comparing the EER, we notice that it is harder to differentiate between anchors than guests.",
"This can be due to the fact that anchors are using the same acoustic conditions, and the current models are learning recording conditions (Chowdhury et al., 2020a) as well as speaker information.",
"To understand the dialectal nature of QASR dataset, we analyze the acoustic and lexical representations for 100 segments from each speaker 13 .",
"To obtain the dialect labels, we run the pretrained dialect identification models for both speech and text modality.",
"We address the dialect identification as multi-stage classification: Firstly, we predict the labels of the segments MSA vs DA and, secondly, if the label is DA, we further propagate the labels to detect the country of the selected speaker (i.e fine-grained dialect classifica-tion).",
"For country level evaluation, we manually annotate each speaker's country label (see Table 8).",
"For lexical modality, we use the pre-trained QADI (Abdelali et al., 2020), and for the acoustic modality, we use ADI5 14 (Shon et al., 2017; Ali et al., 2019) as MSA vs DA classifier along with ADI17 15 (Shon et al., 2020) for fine-grained labels.",
"We observe that in both the modalities, 50 % of the anchors speak MSA in 70 % of the time in speech and 90 % of the time in text.",
"As for the other 50 %, we notice that using the dialect identification modules, we can detect only 20 % of the speaker's nationality correctly.",
"The aforementioned observations are pre-anticipated, as anchors are professionally trained to speak mostly in MSA, making it harder for the model to predict the correct country label.",
"This also explains why the large portion of the data is MSA.",
"As for guest speakers, we notice that the lexical classifier detected that 30 % of the speakers use MSA, while 70 % of the speakers were detected as DA.",
"As for the acoustic models, we notice that all speakers use dialects more than 70 % of the time.",
"Comparing the accuracy of identifying the correct dialects based on annotated country labels, we notice that both the text and acoustic models perform comparatively better in identify the guest speakers' country 64 % from text and 65 % from acoustic.",
"Our hypothesis for such increase in performance is that guest speakers, unlike the anchors, mostly speak using their dialects, making it easier for the model to infer their country.",
"When comparing the decision from both modalities, we notice that there is an agreement of 67 .",
"5 % ( 65 % for anchor and 70 % for guest speakers) for MSA/DA classification.",
"Most of the classification errors in speech and text dialect identification models are due to confusion between dialects spoken in neighboring countries; e.g. Syria and Lebanon in the Levantine region; Tunisia and Algeria in the North African region.",
"NER is essential for a variety of NLP applications such as information extraction and summarization.",
"There are many researches on Arabic NER for news articles, e.g. ANERcorp (Benajiba and Rosso, 2008) and microblogs (Darwish, 2013).",
"However, we are not aware of any studies or datasets for NER in Arabic news transcription, which can be useful for applications like video search.",
"We manually annotate and revised the MGB2 testset for basic NE types, namely Person (PER), Location (LOC), Organization (ORG) and Others (OTH/MISC) following the guidelines in (Benajiba and Rosso, 2008).",
"The testset ( 70 K words) along with NER annotation is available as part of QASR.",
"From the annotation, we observed NEs are 7 % of the corpus and their distribution is as follows: PER= 32 %, LOC= ANERcorp QASR Type P R F1 P R F1 PER 87 .",
"We test the publicly available Arabic Farasa NER on our new testset and compare performance with the standard news testset (ANERcorp).",
"Results are listed in Table 10.",
"As shown, testing NER on transcribed speech has lower F1 by 15 % compared to testing on a standard news testset (from 84 . 3 % to 69 . 8 %).",
"We anticipate that characteristics of speech transcription described in Section 2.3 affected NER negatively 17 .",
"We keep enhancing NER for speech transcription for future work.",
"In this paper, we introduce a 2 , 000 hours transcribed Arabic speech corpus, QASR.",
"We report results for automatic speech recognition, Arabic dialect identification, speaker verification, and punctuation restoration to showcase the importance and usability of the dataset.",
"QASR is also the first Arabic speech-NLP corpus to study spoken modern standard Arabic and dialectal Arabic.",
"We report for the first time named entity recognition in Arabic news transcription.",
"The 11 , 092 unique speakers present in QASR can be used to study turn-taking and speaker dynamics in the broadcast domain.",
"The corpus can also be useful for unsupervised methods to select speaker for text to speech (Galle-gos et al., 2020).",
"The QASR is publicly available for the research community.",
"This work was made possible with the collaboration between Qatar Computing Research Institute, HBKU and Aljazeera media network.",
"This data is hosted on ArabicSpeech portal 18 , which is a community based effort that runs for the benefit of Arabic speech science and technologies.",
"QASR dataset only includes programs that have been broadcast by the Aljazeera news media.",
"No additional identity of the guest is revealed in the data, which was made anonymous in the original program.",
"However, in the future, if any concern is raised for a particular content, we will comply to legitimate concerns by removing the affected content from the corpus.",
"Any biases found in the dataset are unintentional, and we do not intend to do harm to any group or individual.",
"The bias in our data, for example towards a particular gender is unintentional and is a true representation of the programs.",
"We do address these concerns by collecting examples from both parties before any general suggestion.",
"As for the assigned annotation label, we follow a well-defined schema and available information to perceive a final label.",
"For e.g. gender label male/female is perceived from the data and might not be a true representative of the speakers' choice.",
"We request the research community to be aware that our dataset can be used to misuse quotes for the speakers for political or other gain.",
"If such misuse is noticed, human moderation is encouraged in order to ensure this does not occur."
] | [
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"other",
"other",
"method",
"other",
"method",
"abstain",
"other",
"method",
"method",
"result",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain"
] |
[
"More than 43% of the languages spoken in the world are endangered, and language loss currently occurs at an accelerated rate because of globalization and neocolonialism.",
"Saving and revitalizing endangered languages has become very important for maintaining the cultural diversity on our planet.",
"In this work, we focus on discussing how NLP can help revitalize endangered languages.",
"We first suggest three principles that may help NLP practitioners to foster mutual understanding and collaboration with language communities, and we discuss three ways in which NLP can potentially assist in language education.",
"We then take Cherokee, a severely-endangered Native American language, as a case study.",
"After reviewing the language's history, linguistic features, and existing resources, we (in collaboration with Cherokee community members) arrive at a few meaningful ways NLP practitioners can collaborate with community partners.",
"We suggest two approaches to enrich the Cherokee language's resources with machine-in-the-loop processing, and discuss several NLP tools that people from the Cherokee community have shown interest in.",
"We hope that our work serves not only to inform the NLP community about Cherokee, but also to provide inspiration for future work on endangered languages in general.",
"1 1 Introduction There are an estimated 6000 to 7000 spoken languages in the world, and at least 43% of them are endangered.",
"2 Throughout history, languages have naturally shifted and declined into dormancy.",
"The current speed of language loss, however, is far beyond natural.",
"Some linguists estimate that between 50% and 90% of languages will be 1 Our code and data will be open-sourced at https:// github.com/ZhangShiyue/RevitalizeCherokee .",
"severely endangered or dead by the end of this cen-tury (Austin and Sallabank, 2011).",
"This acceleration of language endangerment owes largely to cultural, political, and economic marginalization and the rise of global imperialism.",
"Worldwide, indigenous people have suffered from colonization or conquest and given up their mother tongues in favor of another language.",
"In order to achieve a higher social status, indigenous people have had to capitulate to colonizers' linguistic norms.",
"Following Ladefoged (1992), we acknowledge that burdens such as raw material survival outweigh the more abstract concerns of maintaining a language.",
"In other words, we cannot blame or fault indigenous people for giving up their languages in order to secure a better life under intense socioeconomic pressures.",
"As linguists and NLP researchers, we have the responsibility to address these power imbalances and create a society where space exists for indigenous languages.",
"Moreover, language loss is memory loss, identity loss, culture loss, and knowledge loss, and it even affects the health of indigenous people (Whalen et al., 2016).",
"Endangered languages are even more underrepresented in the NLP literature.",
"Joshi et al. (2020) point out that more than 88% of the world languages spoken by around 1.2 billion people are left behind , i.e., they have been and are still ignored in the aspect of language technologies.",
"Blasi et al. (2021) show that linguistic NLP tasks (e.g., morphology analysis) are more language inclusive than user-facing NLP tasks (e.g., machine trans-lation).",
"In this information age, NLP techniques are widely applied on the Internet.",
"Much Internet content that we are exposed to daily is processed or even created by NLP techniques.",
"Hence, the lack of NLP technology support for endangered languages reduces the degree to which users are exposed to them.",
"Unfortunately, this exacerbates the problem of linguistic marginalization, as frequent language exposure is critical to language acquisi-1529 tion.",
"At worst, it can generate a downward spiral: since fewer speakers create content using these languages, the scarcity of resources will in turn hinder the development of NLP technologies.",
"On the other hand, the majority of NLP research is biased towards high-resource languages, neglects diverse linguistic typologies (Joshi et al., 2020), and often relies on the availability of large-scale data.",
"Including endangered languages can help diagnose NLP models' generalizability (Bender, 2011) and push towards universal and data-efficient approaches.",
"In this work, we address three important steps on the roadmap of NLP for language revitalization: starting from before NLP to NLP for language education to language-specific NLP research.",
"Before diving into NLP research, we first suggest that NLP practitioners, who are often outsiders of indigenous communities, become aware of three important principles: understand and respect first , decolonize research , and build a community .",
"We especially want to promote building a community .",
"Since few people are speaking, learning, or studying an endangered language, the knowledge of each individual, the collected resources, and the developed models should be shared as widely and sustainably as possible.",
"Hence, we need a community to support this (see Section 2).",
"Second, language revitalization is an attempt to reverse the decline of a language (Tsunoda, 2013).",
"Fundamentally, this requires an increase in the number of active speakers to bring the language back to day-to-day use (Austin and Salla-bank, 2011).",
"Due to the lack of inter-generation transmission, language education in school or online is important.",
"We introduce three approaches for applying NLP techniques in assisting language education (Section 3): automated quiz generation , automated assessment , and community-based language learning .",
"The last approach connects to our previous point about building a community.",
"Next, we introduce the case study of Cherokee an endangered 3 Native American language with only 2,000 fluent first-language speakers remaining.",
"We first review its history (Section 4.1) to understand how social, political, and economic repression have harmed the Cherokee people and their language.",
"Then, we discuss a few linguistic distinctions of Cherokee (Section 4.2), including polysynthesis, word order, etc., which can help 3 UNESCO has identified the dialect of Cherokee in Oklahoma is definitely endangered, and the one in North Carolina is severely endangered.",
"us design linguistically informed NLP models.",
"In Section 5, we review some existing high-quality Cherokee resources and propose two methods to enrich resources: community-based resource collection (which also relates to our previous point of building a community) and automatic data mining .",
"Lastly, based on conversations with some Cherokee speakers/researchers, we dive deep into several NLP tools that seem advantageous for community members and may be able to create new usage domains for the language, and we point out the key challenges of their development (Section 6).",
"In summary, we propose suggestions to NLP practitioners, approaches of NLP-assisted language education, and directions for Cherokee language processing.",
"We hope that our work can increase awareness of Cherokee and encourage more work on minority languages.",
"Last but not the least, the authors of this work come from both the Cherokee community (Ben-jamin E. Frey) and the NLP community (Shiyue Zhang and Mohit Bansal).",
"Prof. Benjamin E. Frey is a proficient second-language Cherokee speaker and a citizen of the Eastern Band of Cherokee Indians.",
"He has been teaching Cherokee and contributing to Cherokee revitalization for more than 10 years.",
"He initiated our collaboration and continues bridging the gap between the Cherokee language and language technologies.",
"In addition, we have been talking with some other Cherokee community members, including David Montgomery and Eva Marie Garroutte.",
"Prof. Eva Marie Garroutte from Boston College said: As a citizen of the Cherokee Nation, I am very concerned for the preservation of my tribe's endangered language and I am convinced that Dr. Frey's work represents the most promising project known to me for advancing this goal.",
"Though the views in this paper by no means represent the whole Cherokee community, our proposals are strongly initiated and motivated by Cherokee community members and grounded by NLP practitioners.",
"We offer NLP practitioners, who are often outsiders of indigenous communities, three general principles to follow before conducting NLP research on endangered indigenous languages.",
"Understand and Respect First.",
"Meaningful advances in building speech and language technologies for under-resourced languages hinge upon being able to understand those languages' speaker communities and their needs.",
"Although the initial temptation among NLP researchers might be to dive in with questions about particular computational tools, that conversation cannot unfold until the speaker communities' more basic needs are met: the need for respect, reciprocity, and understanding.",
"It may be tempting to say this is outside the scope of our current research, yet these kinds of behaviors and assumptions are the very behaviors that led to the disenfranchisement of these groups.",
"When we ignore someone's common humanity and assume that our need for control over the narrative and the situation is greater than their need to be seen and respected, we participate in the same marginalizing and dehumanizing behaviors that led to the problem we are purporting to address.",
"Therefore, it is essential that we address the cultural practices and social norms of endangered language communities before assuming we know how to position ourselves, them, and our research within their communities.",
"Decolonize Research.",
"Decolonizing research means placing indigenous voices and needs at the center of the research process (Smith, 1999; Datta, 2018; Bird, 2020a).",
"As NLP researchers, we are used to certain methodologies.",
"When it comes to questions about endangered languages, it is tempting for us to formulate the new problems we encounter as the ones we are already familiar with.",
"However, we should always question ourselves: Is the formulation suitable for the language we conduct research on?",
"Are the methodologies we are familiar with the only true ways to solve the problems?",
"Unquestioned focus on typical methodologies can make us treat languages as commodities, start to play a numbers game (e.g., over the size of the data), and forget the real problem we intend to solve in the first place: language revitalization (Dobrin et al., 2007).",
"At every research step, it is critical to weigh the burden we put upon the speakers against the benefit that the research can bring back to their community.",
"If the research outcome conveys no new knowledge, information, or benefit to the community, it is no different from taking the indigenous knowledge that has accumulated over the centuries.",
"That is exactly why the word research is sometimes the dirtiest word (i.e., conjuring up bad memories) in the indigenous world's vocabulary (Smith, 1999).",
"Finally, it is important to deal carefully with copyright and data governance; meanwhile, we advocate open-source and community-contributed works.",
"Build a Community.",
"Fundamentally, we want to work together with people from the indigenous communities (Bird, 2020a, 2021).",
"It is the most effective way to foster mutual understanding.",
"We should communicate with the indigenous people and get to know their priorities.",
"Common attitudes need to be fostered, common interests need to be found, and common goals need to be set up, before performing the research.",
"These all lead to a community.",
"We envision an online community (a website) where native speakers can share their knowledge and language learners can find resources and learn the language together (see Section 3).",
"People can share resources and participate in machine-in-the-loop resource collection projects (see Section 5).",
"NLP researchers can evaluate and share their models in this community.",
"Entertaining language learning or resource collection games can be launched.",
"We hope the community can support wide and sustainable collaborations between indigenous speakers, language learners, and NLP practitioners.",
"Compared to the speakers' local communities, this online community will be heavily supported by technology.",
"A few NLP communities, e.g., MasakhaneNLP (focusing on African languages) and SIGEL (the special interest group on endangered languages), have already been built.",
"In contrast, the community we promote here will support both NLP research and language learning.",
"Lastly, compared to Telegram groups (we are in a few different Telegram groups with Cherokee community members), we want to build a more open community that everyone can access.",
"Since little intergenerational language transmission is happening, language education is an essential requirement of language revitalization.",
"Computer-assisted language learning has a long-standing history (Higgins, 1983), and two workshops, BEA (https://aclanthology.org/venues/bea/) and NLP4CALL (https://aclanthology.org/venues/nlp4call/), are dedicated to research on applying NLP to language education.",
"Here, we discuss three ways in which NLP can potentially assist language education of endangered languages.",
"Automated Quiz Generation.",
"A direct way in which NLP can help is automatically generating quizzes for language learners.",
"Practicing and producing the language in question are critical to language acquisition (Gass and Mackey, 2013).",
"Usually, language instructors manually design the quizzes, which is tedious and time-consuming; not to mention, there are not many instructors for endangered languages.",
"However, given the available text of endangered languages, NLP can easily and automatically generate cloze questions.",
"It can also help find distracting wrong answers that occur in similar contexts and thus form multiple-choice questions (Hill and Simha, 2016; Susanti et al., 2018).",
"To increase playfulness, language learning games, e.g., crossword puzzles and flashcards, can also be automatically generated (Rigutini et al., 2012; Xu and Ingason, 2021).",
"Since these applications involve very basic language processing steps, NLP techniques can be reliably and easily applied.",
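"As a rough illustration of how simple this pipeline can be, the sketch below builds cloze questions from a plain-text corpus and picks distractors that occur in similar contexts; the corpus path, the co-occurrence-based similarity, and the overall design are our own assumptions, not the cited systems.",
```python
# Minimal sketch of cloze-quiz generation with context-based distractors,
# assuming a corpus file with one tokenized sentence per line (hypothetical).
import random
from collections import defaultdict

def load_sentences(path):
    with open(path, encoding="utf-8") as f:
        return [line.split() for line in f if line.strip()]

def cooccurrence_vectors(sentences, window=2):
    # Represent each word by counts of the words appearing near it.
    vectors = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if i != j:
                    vectors[word][tokens[j]] += 1
    return vectors

def distractors(answer, vectors, k=3):
    # Wrong answers that share the most context words with the true answer.
    target = vectors[answer]
    scored = sorted(
        (sum(min(c, target.get(w, 0)) for w, c in ctx.items()), word)
        for word, ctx in vectors.items() if word != answer)
    return [word for _, word in scored[-k:]]

def make_cloze(sentence, vectors):
    i = random.randrange(len(sentence))
    answer = sentence[i]
    question = " ".join(sentence[:i] + ["____"] + sentence[i + 1:])
    options = distractors(answer, vectors) + [answer]
    random.shuffle(options)
    return question, options, answer

sentences = load_sentences("cherokee_corpus.txt")  # hypothetical corpus file
vectors = cooccurrence_vectors(sentences)
print(make_cloze(random.choice(sentences), vectors))
```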
"Automated Assessment.",
"Another widely studied topic is NLP-supported automatic assessment.",
"Though many advanced assessments, e.g., grammar error correction (Bryant et al., 2019) and essay grading (Chen et al., 2016), are difficult to apply to endangered languages, we argue that some easier assessments are feasible.",
"For example, automatic error analysis and template-based feedback can be provided for language learning quizzes.",
"Another challenging but feasible assessment is to assess the readability or difficulty of language learning materials to provide suitable learning plans for learners of different levels.",
"Using statistical and linguistic features, such as word frequency, morphological or syntactic complexity, etc., readability and difficulty can be automatically predicted (Schwarm and Ostendorf, 2005; Vajjala and Meurers, 2012).",
"However, basic NLP tools, like POS taggers, dependency parsers, and morphological analyzers, need to be developed before these applications can be realized.",
"The development of these tools requires small but highly-curated data (Blasi et al., 2021).",
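"To make the feature-based idea concrete, here is a hedged sketch of difficulty prediction from shallow statistics (mean word length, mean corpus frequency, sentence length); the toy labels and example strings are purely illustrative, not data or a recipe from the cited works.",
```python
# Sketch: predict the difficulty of a learning text from simple statistical
# features, in the spirit of Schwarm and Ostendorf (2005); all data is toy.
from collections import Counter
from sklearn.linear_model import LogisticRegression

def features(text, freq):
    tokens = text.split()
    n = max(len(tokens), 1)
    return [
        sum(len(t) for t in tokens) / n,          # mean word length
        sum(freq.get(t, 0) for t in tokens) / n,  # mean corpus frequency
        n,                                        # length in tokens
    ]

# Hypothetical labeled pairs: (text, difficulty), 0 = easier, 1 = harder.
train = [("gega", 0), ("agiha", 0),
         ("tsaquadvsidisv ulesoda gesv'i", 1), ("agasgv'i tsaquadvsidisv", 1)]
freq = Counter(t for text, _ in train for t in text.split())

clf = LogisticRegression().fit(
    [features(text, freq) for text, _ in train], [y for _, y in train])
print(clf.predict([features("gega agiha", freq)]))  # should come out easier
```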
"Community-based Language Learning.",
"Free online language learning platforms that integrate automated quiz generation and assessment have been developed, e.g., Oahpa (Uibo et al., 2015).",
"Taking one step further, we believe that a more effective approach to supporting endangered language education is to build an online, collaborative language learning platform, following the human computation technique (Von Ahn, 2008).",
"When using technologies to assist in language revitalization, we often face a dilemma.",
"On the one hand, due to the endangerment, there are not many resources available, and it is very expensive (in terms of time, effort, and cost) to collect resources from speakers.",
"On the other hand, machines struggle to reach usable and helpful performance without a decent amount of training data.",
"Human computation aims at combining humans and computers to solve problems neither could solve alone (Von Ahn, 2008; Garcia, 2013).",
"The most famous example is Wikipedia, where Internet users contribute their knowledge together and have created incredibly high-quality content.",
"Other successful cases are Duolingo and Tatoeba.",
"Both let language learners translate web text and rate each other's translations.",
"Then, the translated text can serve as learning materials and training data for NLP models.",
"However, Tatoeba only has an English interface, and mixes languages on the same site, making it hard to find peer learners of under-resourced languages.",
"Though Duolingo has language-specific sites, it supports only 23 languages so far.",
"Therefore, how to make use of collaborative language learning platforms for endangered languages is a big challenge.",
"Nonetheless, we believe that it is a promising path to take for teaching endangered languages to the young generation in this information age.",
"Starting from this section, we illustrate the situation of endangered languages through the example of Cherokee.",
"We first review its history and linguistics.",
"In NLP, we rarely get to know the languages themselves and often just let models learn statistical patterns automatically from data.",
"However, it is critical to have basic knowledge of the language when contributing to its revitalization.",
"Tribal Sovereignty.",
"Before encountering Europeans, American Indians were already governing themselves.",
"By drafting treaties with indigenous nations, the colonial powers implicitly recognized their sovereignty.",
"Those treaties are still valid today, and tribal peoples are very much operating as sovereign nations, separate from the US (NCAI, 2020).",
"There are three federally recognized nations of Cherokee people: Cherokee Nation of Oklahoma (CN), United Keetoowah Band of Cherokee Indians (UKB), and Eastern Band of Cherokee Indians (EBCI).",
"Traditional Cherokee homeland covered parts of what are now eight US states: North Carolina, South Carolina, Georgia, Kentucky, Tennessee, Alabama, Virginia, and West Virginia.",
"EBCI is composed of those Cherokees who were able to remain in their homeland.",
"CN is largely composed of the descendants of those who were forcibly removed to Indian Territory along the infamous Trail of Tears in 1838 (Perdue and Green, 2007), while the UKB is composed largely of those whose ancestors chose to remove themselves west of the Mississippi.",
"Although the three nations are politically independent, they all descend from the same Cherokee people, and maintain common interests, cultural elements, and language.",
"The Language and its Dialects.",
"Cherokee is the only surviving member of the Southern Iroquoian language family, which separated from the Northern Iroquoian languages about 4,000 years ago (Julian, 2010).",
"James Mooney identified three main dialects of Cherokee: the Overhill dialect, the Underhill dialect (which has died out), and the Middle, or Kituwah, dialect.",
"The Overhill dialect is primarily spoken in Oklahoma, and the Middle dialect is predominantly spoken in North Carolina today.",
"Although according to UNESCO, both dialects are endangered, Cherokee is comparatively well-reported among American Indian languages.",
"This is partially due to its writing system, known as the 85-character Cherokee syllabary.",
"It was invented in the early 1820s by Sequoyah (Britannica, 2021).",
"The Cherokees have a newspaper written in their own language: the Cherokee Phoenix.",
"The Phoenix, alongside the Cherokee New Testament, formed cornerstones of the Cherokee language in the 1800s on which many current language preservation and archiving projects rest.",
"Language Endangerment.",
"Cherokee was robustly spoken until around the 1930s.",
"The primary factor responsible is the US government's civilization policy, which aimed to remove American Indians' cultural distinctions (Spring, 2016).",
"Federal boarding schools were created on the model of military institutions by Richard H. Pratt under the philosophy of kill the Indian, save the man (Pratt, 2013).",
"American Indian children were sent to residential schools to be educated in how to live in ways more similar to their white contemporaries.",
"School overseers cut their hair, forced them to abandon their traditional dress, and punished them for speaking their traditional languages.",
"Beyond the trauma, when they returned to communities, banks, post offices, factories, and grocery stores were all controlled nonlocally.",
"People working in them no longer spoke Cherokee, either because they were not from Cherokee communities or because their employers were not Cherokee speakers.",
"This transition contributed to the decline of the language in daily use, until, around the 1950s, the first generation grew up with only English as the language of the home (Gulick, 1958; Frey, 2013).",
"Recently, the larger project of language revitalization, of which this paper is a part, endeavors to return the language to regular day-to-day use in the Cherokee communities.",
"Polysynthetic.",
"Cherokee, like most American Indian languages, is polysynthetic.",
"This means that words are primarily composed of a root whose meaning is modified by multiple prefixes and suffixes.",
"The word gega can be divided up: g-, -e-, -ga.",
"The g- prefix indicates that the subject of the verb is 1st person singular, while the -ga suffix indicates that the action happens in the present tense and the aspect is progressive.",
"The verb root -e- conveys the idea of motion.",
"The simplest verb form in Cherokee will contain at minimum a root, a pronominal prefix, and a tense/aspect suffix.",
"One oft-noted aspect of Cherokee grammar is its classificatory system, wherein verbs with direct objects must conjugate to indicate the physical shape of the direct object.",
"The verb I have, for instance, could appear in any of the following ways: Agiha (I have (solid)), Agineha (I have (liquid)), Agwvya (I have (long & rigid)), Agina'a (I have (flexible)), Agikaha (I have (animate)).",
"Cherokee also has pre-pronominal prefixes that can specify the geographical location of particular events, such as wi- (translocative), which indicates that the action will happen at a distance away from the speaker, and di- (cislocative), which indicates the action will happen at a distance approaching the speaker.",
"Word Order.",
"Word order in Cherokee is dependent on the larger pragmatic context in which the sentence appears, with new information or timeframes occurring before the verb and old or established information occurring post-verbally.",
"Subject-object agreement is handled largely via the dual-argument pronominal prefixes.",
"E.g., in I see it ( tsigowatiha ), the pronominal prefix tsi- indicates 1st person singular (I) acting on 3rd person singular (it).",
"In agigowatiha, we change tsi- to agi-, which means 3rd person singular acting on 1st person singular.",
"Person & Number.",
"Although English has only two categories of number, singular and plural, Cherokee has a third, dual category.",
"Therefore, a verb in Cherokee can be conjugated in first, second, or third person and specified for either singular, dual, or plural subjects.",
"Dual and plural prefixes in the first person must then be further subdivided by clusivity, yielding 1st-person dual inclusive (you & I) or exclusive (she/he & I), 1st-person plural inclusive (all of us) or exclusive (they & I).",
"The second person can inflect for dual (you two) or plural (you all).",
"Cherokee does not have a third-person dual form, and speakers usually use the plural form when referring to two third persons.",
"Verb-centric.",
"Cherokee is very verb-centric, and verbs comprise 75% of the Cherokee lexicon (Feeling, 1975).",
"Cherokee nouns are divided into root nouns (which have no verbal inflection attached to them) and derived nouns (which carry verbal morphology).",
"Similarly, Cherokee adjectives can be distinguished from verbs in that their forms cannot carry the tense/aspect morphology typical of actual verbs.",
"Thus, to say someone is skinny, ulesoda carries the pronominal prefix u-, indicating 3rd person singular, while ulesoda gesv'i marks past tense by adding a separate copula (to be) that carries the tense/aspect suffix -v'i.",
"Evidentiality.",
"Cherokee is also marked by a system of evidentiality (indicating whether one has firsthand knowledge of past events, or if one is reporting on hearsay).",
"E.g., one might say agasgv'i, it was raining (and I have firsthand knowledge of this), vs. agasge'i, it was raining (from what I understand).",
"Interestingly, this phenomenon applies regardless of the assumed truth of the statement in question.",
"Phoneme.",
"Cherokee's phoneme inventory is, like other Iroquoian languages, almost completely bereft of bilabial sounds.",
"It entirely lacks the p and b phonemes, along with f, v, and any r sound.",
"It has six vowels, a, e, i, o, u, and v, which are generally pronounced with continental values, as in Spanish, except for v.",
"The consonant inventory is small, at only 13 consonants, and most will be familiar to English speakers.",
"The main exception is the voiceless alveolar fricative, likely more familiar to Icelandic speakers.",
"The availability of language resources is not only important for language education but also determines the development of NLP technologies.",
"Cherokee is categorized into The Scraping-Bys by Joshi et al. (2020), which means it has some amount of data, but concerted efforts still need to be made to increase awareness of the language.",
"Existing Resources Online.",
"Compared to high-resource languages, it is not easy to locate many Cherokee resources on the Internet.",
"Here, we point to a few places where high-quality Cherokee resources for language learning or NLP model training can be found: (1) Cherokee-English Dictionary (https://www.cherokeedictionary.net) has online Cherokee-English dictionaries, a transliteration tool, a grammar guide, and a few Cherokee text and audio corpora; (2) the Cherokee Nation website (https://language.cherokee.org) contains Cherokee online classes, learning materials, fonts and keyboards, etc.; (3) the UNC Cherokee Program website (https://cherokee.web.unc.edu) has UNC Cherokee class resources and pointers to external resources; (4) the Cherokee Language Github group (https://github.com/CherokeeLanguage) gathers a lot of Cherokee text and audio data, as well as initial attempts at speech synthesis and some other NLP tools.",
"(5) The Cherokee Phoenix (https://www.cherokeephoenix.org) publishes all-Cherokee issues as well as some bilingual articles with Cherokee audio (https://tinyurl.com/4nf9txkf).",
"(6) We released around 17K Cherokee-English parallel sentence pairs (Zhang et al., 2020), available at https://github.com/ZhangShiyue/ChrEn/tree/main/data/parallel_01172022.",
"In addition, Cherokee Wikipedia is available, but its content is noisy.",
"A Cherokee resource catalog could be built in the future to make resources easier to locate.",
"Community-based Resource Collection.",
"Besides existing resources, we suggest collaborative resource collection, which can be integrated with the community-based language learning platform we introduced in Section 3.",
"A simple feature of this platform could be a dropbox where people who are willing to contribute their resources can drop in the files they have.",
"14 The back-end program can support any kind of data processing based on the contributor's request and permission.",
"(An example of such a dropbox can be found at https://cherokee.web.)",
"Then, the resources can be shared back with the community as language learning and model training resources.",
"Second, for more complex data annotation tasks, like POS tagging and dependency parsing, we suggest setting up game with a purpose (GWAP) applications on this website.",
"GWAP was introduced by Luis Von Ahn (Von Ahn, 2006; Von Ahn and Dabbish, 2008), who is also the founder of Duolingo.",
"One famous example is his ESP game (Von Ahn and Dabbish, 2004), which formulates the image recognition task as a game.",
"Following this idea, NLP practitioners can design diverse games on the platform to increase the fun and engagement of language learning and resource collection.",
"In addition, this platform will focus more on what kind of materials the Cherokee community members consider important to preserve instead of what the NLP researchers find most valuable.",
"Automatic Resource Mining.",
"As NLP practitioners, we should try to make the most use of computers for collecting resources automatically.",
"A lot of automatic data mining methods have been proposed to mine monolingual or bilingual text from the noisy web or Wikipedia (Guo et al., 2018; Artetxe and Schwenk, 2019; Schwenk et al., 2019; Wenzek et al., 2020; Schwenk et al., 2021; Arkhangelskiy, 2019).",
"Though the mined text contains many errors and much noise, previous work demonstrates that neural NLP models are surprisingly good at learning from noisy training data.",
"However, some additional NLP components, like language identifiers and multilingual embeddings, need to be developed to support the data mining.",
"For instance, to mine Cherokee-English parallel text, we will need to map English and Cherokee sentences to the same representation space to compute their similarity.",
"However, existing tools for obtaining multilingual sentence embeddings, like LASER, do not support Cherokee, and Cherokee is neither related to nor shares a script with any supported language.",
"But, given the existing Cherokee-English parallel data (Zhang et al., 2020), we can retrain these tools to support Cherokee.",
"Note that these automatic text miners can start with both crawled web text and OCR-processed text (Section 6.2).",
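"The sentence-similarity step can be sketched as follows; the margin criterion loosely follows Artetxe and Schwenk (2019), and the embeddings here are random placeholders standing in for a retrained LASER-style bilingual encoder.",
```python
# Sketch: mine candidate Cherokee-English pairs by cosine similarity of
# sentence embeddings, filtered by a simple margin over nearest neighbors.
import numpy as np

def mine_pairs(chr_vecs, eng_vecs, k=4, threshold=1.05):
    x = chr_vecs / np.linalg.norm(chr_vecs, axis=1, keepdims=True)
    y = eng_vecs / np.linalg.norm(eng_vecs, axis=1, keepdims=True)
    sim = x @ y.T  # cosine similarity matrix
    # Mean similarity of each sentence to its k nearest neighbors.
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1)
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0)
    pairs = []
    for i in range(sim.shape[0]):
        j = int(sim[i].argmax())
        margin = 2 * sim[i, j] / (knn_x[i] + knn_y[j])
        if margin >= threshold:
            pairs.append((i, j, float(margin)))
    return sorted(pairs, key=lambda p: -p[2])

# Placeholder embeddings; a real run would encode crawled or OCR'd sentences
# with a bilingual encoder retrained on the Cherokee-English parallel data.
rng = np.random.default_rng(0)
print(mine_pairs(rng.normal(size=(5, 16)), rng.normal(size=(8, 16))))
```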
"Many NLP technologies hold the potential to be useful in Cherokee language revitalization.",
"Thus, some initial attempts have been made by the Cherokee Language Github group and us (Zhang et al., 2020, 2021).",
"Hence, we dive deep into several specific NLP tools for Cherokee language processing in this section.",
"For any NLP tool we develop, we want it to be evaluated by Cherokee speakers, and we suggest open-sourcing it for free use.",
"Connecting to our build a community proposal, we hope that NLP models can also be shared and used widely and sustainably in the community.",
"Ideally, a good machine translation (MT) system could automatically translate the vast amount of English text into Cherokee, or it could assist human translators.",
"Dr. David Montgomery, a citizen of Cherokee Nation and a Cherokee language learner, commented on MT: It would be a great service to Cherokee language learners to have a translation tool as well as an ability to draft a translation of documents for first-language Cherokee speakers to edit as part of their translation tasks. If these tools can be made to work accurately, they would be transformative for the Cherokee language.",
"Previously, we collected parallel text and developed an online MT demo between Cherokee and English (Zhang et al., 2020, 2021).",
"However, our system can only translate fragments of the source sentence and makes major mistakes, which is far from being practically useful.",
"The first challenge of MT development is the lack of data.",
"Automatic data mining can help enrich MT training data (Section 5).",
"But we still need high-quality and diverse evaluation data because existing evaluation sets (Zhang et al., 2020) are from limited domains (the majority is the Bible).",
"Recently, Flores-101, an MT evaluation benchmark covering 101 languages, has been created (Goyal et al., 2021).",
"Though it has not yet covered Cherokee, we hope it can happen in the future.",
"The second challenge is processing and producing Cherokee text.",
"Cherokee has rich morphology (see Section 4.2).",
"One Cherokee word can be translated into one English sentence.",
"Intuitively, we would think subword tokenization (Sennrich et al., 2016; Kudo, 2018) is helpful.",
"However, previously, we (Zhang et al., 2020) showed that applying subword tokenization for English to Cherokee translation is harmful.",
"We argue that it is because we processed Cherokee text in its syllabary rather than in transliterated Latin script; morphemes are easier to learn from the latter.",
"E.g., in tsaquadvsidisv (when I was growing up), the prefix ts- marks relative clauses, but the corresponding syllabary character is tsa.",
"We suspect that character-level generation (in Latin script) would work better for Cherokee.",
"Additionally, Cherokee has flexible word order that is often determined by whether the information is new or old in relation to the larger discourse (Section 4.2).",
"Thus, document-level translations are more reasonable than typical sentence-level translations.",
"The majority of Cherokee text is in the format of manuscripts or books, as is the case for many other endangered languages (Joshi et al., 2020; Bustamante et al., 2020).",
"Though humans can read them, they are not machine-readable, which restricts the flexibility of their use, e.g., automatically creating language learning quizzes.",
"Optical character recognition (OCR) (Smith, 2007) can help extract plain text from PDFs or images.",
"Fortunately, existing OCR tools, like Tesseract-OCR (https://github.com/tesseract-ocr/) and Google Vision OCR API (https://cloud.google.com/vision/docs/ocr), support Cherokee and have decent accuracy.",
"However, OCR accuracy is highly influenced by image quality.",
"If the image has a noisy background or the text is surrounded by colorful pictures (which often happens in children's books), the OCR accuracy will drop significantly.",
"To demonstrate this, we create two evaluation sets from Cherokee books (including the Cherokee New Testament, children's books, and Cherokee narratives): (1) Original has 20 images, and each image is one complete page from a book; (2) Screenshot is obtained by manually taking screenshots to cut the text out of the 20 images, i.e., removing background noise.",
"For each image in the two sets, we manually annotate the corresponding text.",
"Table 1 shows the results of Tesseract-OCR and Google Vision OCR API on our evaluation sets: Tesseract-OCR obtains WER 0.355 / CER 0.230 on Original and WER 0.151 / CER 0.063 on Screenshot, while Google Vision obtains WER 0.533 / CER 0.199 on Original and WER 0.468 / CER 0.074 on Screenshot.",
"Both OCR tools achieve significantly lower error rates on the Screenshot set than on the Original set, which demonstrates the importance of cleaning the images.",
"Tesseract-OCR shows better performance than Google Vision OCR; in particular, it is better at detecting word boundaries.",
"Although ways to improve image quality are available (see https://tinyurl.com/29xnewu9), an easy-to-use tool needs to be developed.",
"OCR post-correction methods can also be applied (Rijhwani et al., 2020).",
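"For concreteness, a minimal version of this evaluation looks like the sketch below; it assumes the pytesseract bindings, a Cherokee traineddata file installed under the language code chr, and hypothetical file names.",
```python
# Sketch: OCR one page image and score it with WER/CER via edit distance.
import pytesseract
from PIL import Image

def edit_distance(ref, hyp):
    # Standard one-row dynamic-programming Levenshtein distance.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def error_rate(reference, hypothesis, unit="word"):
    ref = reference.split() if unit == "word" else list(reference)
    hyp = hypothesis.split() if unit == "word" else list(hypothesis)
    return edit_distance(ref, hyp) / max(len(ref), 1)

gold = open("page01.txt", encoding="utf-8").read().strip()  # manual transcript
pred = pytesseract.image_to_string(Image.open("page01.png"), lang="chr")
print("WER:", error_rate(gold, pred.strip(), "word"),
      "CER:", error_rate(gold, pred.strip(), "char"))
```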
"Automatic speech recognition (ASR) (Povey et al., 2011) can help language documentation, though indigenous community members may prefer unassisted transcription (Prud'hommeaux et al., 2021).",
"Moreover, ASR holds the potential to automatically transcribe audio data and thus enrich the text corpus.",
"A good amount of Cherokee audio data can be found in the Cherokee Voices, Cherokee Sounds radio program, the Cherokee Phoenix, and recorded meetings.",
"ASR can automatically transcribe these recordings to produce valuable Cherokee text data.",
"Recently, models that are first pre-trained on audio data and then finetuned on audio-text data have shown great advantages in performing ASR (Baevski et al., 2020).",
"In particular, Conneau et al. (2020) pretrain and finetune a model on 53 languages and release XLSR-53, which supports ASR for those 53 languages.",
"It shows reasonable generalizability to unseen and low-resource languages.",
"This sheds light on developing ASR for endangered languages.",
"Hence, we test its performance for Cherokee ASR.",
"Using the audio-text data open-sourced at https://github.com/CherokeeLanguage/cherokee-audio-data or shared privately by Michael Conrad, we build two ASR models: (1) audio to phonetic text and (2) audio to syllabic text.",
"See more details in Appendix A.1.",
"As shown in Table 2, we get surprisingly good performance, especially for the audio-to-syllabic-text model: the finetuned XLSR-53 (Conneau et al., 2020) models achieve WER 0.64 for audio to phonetic text and WER 0.21 for audio to syllabic text.",
"(For comparison, the same model finetuned on CommonVoice's Turkish data gets WER 0.35.)",
"This is very promising, especially given that more self-training strategies can be applied (e.g., pretraining the speech encoder with Cherokee audio data) and more audio-text training data can be compiled.",
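"As a sketch of the inference side of this setup (full fine-tuning details are in Appendix A.1), CTC decoding with a wav2vec2/XLSR-53-style checkpoint via HuggingFace transformers looks as follows; the checkpoint name is a placeholder for a Cherokee-finetuned model, not a released artifact.",
```python
# Sketch: transcribe a 16 kHz mono Cherokee recording with a CTC-finetuned
# wav2vec2/XLSR-53 model (checkpoint name below is hypothetical).
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

checkpoint = "xlsr53-finetuned-cherokee"  # placeholder, not a released model
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint).eval()

speech, sample_rate = sf.read("recording.wav")  # expects 16 kHz mono audio
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits  # (1, frames, vocab)

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])  # phonetic or syllabic text
```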
"Text-to-speech synthesis (TTS) is more difficult to develop than ASR; nevertheless, following the pretrain-then-finetune paradigm, TTS models for extremely low-resource languages have been introduced (Xu et al., 2020).",
"Tokenization is an essential pre-processing step of most NLP models, and it is related to morphology parsing.",
"Subword tokenization has become the de facto standard (Sennrich et al., 2016; Kudo, 2018).",
"It segments a word into frequent subwords, and subwords are supposed to align with morphemes.",
"Better alignment with morphemes can lead to better downstream performance (Bostrom and Durrett, 2020), while current subword tokenization methods struggle to perform well in morphologically rich languages (Amrhein and Sennrich, 2021).",
"Here, we evaluate how well subword tokenization can learn real morphemes for Cherokee.",
"We train two subword tokenizers, Unigram LM (Kudo, 2018) and BPE (Sennrich et al., 2016), both implemented in SentencePiece (Kudo and Richardson, 2018), and one morphology parser, Morfessor (Smit et al., 2014), on our previous MT training set (Zhang et al., 2020).",
"Instead of using the original syllabic text, we transliterate text into Latin script to make it easier to learn morphemes.",
"We collect gold (expert-labeled) morphemes of 372 Cherokee words from Cherokee Narratives (Feeling, 2018).",
"Then, we use the pretrained tokenizers or parser to tokenize these 372 words and evaluate the alignment between subwords and gold morphemes.",
"As shown in Table 3, subwords are poorly aligned with gold morphemes.",
"Nonetheless, Unigram LM (Kudo, 2018) demonstrates better ability of inducing morphemes, which is consistent with the observation made by Bostrom and Durrett (2020).",
"We think better representation methods need to be investigated.",
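"The evaluation can be sketched as follows: train a Unigram LM tokenizer with SentencePiece on the Latin-script text and score how often subword boundaries coincide with gold morpheme boundaries; the file names, vocabulary size, and boundary-F1 scoring are our simplifying assumptions.",
```python
# Sketch: train a SentencePiece Unigram LM tokenizer, then compare its
# segmentation boundaries against gold morpheme boundaries for one word.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="chr_latin_train.txt",  # transliterated training text (hypothetical)
    model_prefix="chr_unigram", vocab_size=1000, model_type="unigram")
sp = spm.SentencePieceProcessor(model_file="chr_unigram.model")

def boundaries(segments):
    # Character offsets where one segment ends and the next begins.
    cuts, pos = set(), 0
    for seg in segments[:-1]:
        pos += len(seg)
        cuts.add(pos)
    return cuts

def boundary_f1(word, gold_morphemes):
    pieces = [p.lstrip("\u2581") for p in sp.encode(word, out_type=str)]
    pred, gold = boundaries(pieces), boundaries(gold_morphemes)
    if not pred or not gold:
        return float(pred == gold)
    tp = len(pred & gold)
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

# Gold segmentation for the running example gega = g- + -e- + -ga.
print(boundary_f1("gega", ["g", "e", "ga"]))
```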
"More basic NLP tools like POS tagger and dependency parser are under-developed for Cherokee.",
"These tools can not only support the development of other NLP tools but also be used to predict the readability of language learning materials (Section 3).",
"Moreover, data for these tasks can serve as language learning materials for understanding Cherokee linguistics.",
"Though unsupervised methods have been proposed (Stratos et al., 2016; Kim et al., 2019), usually small but high-quality labeled data, like Universal Dependencies (Nivre et al., 2016), is needed (Blasi et al., 2021).",
"Therefore, data annotation by experts is required and community-based data collection strategies can be applied (Section 5).",
"Moreover, the parallel English data and English tagger/parser can assist the annotation on the Cherokee side, which will also produce English-Cherokee word/phrase-level alignments as by-products.",
"These alignments are valuable Cherokee language education resources, e.g., for asking students: when you have structure X in English, what is the corresponding structure Y in Cherokee?",
"In this work, we discuss how NLP can help revitalize endangered languages.",
"We first suggest general principles to NLP practitioners and propose ways of NLP-assisted language education.",
"In particular, we promote building an online community that supports collaborative language learning, resource collection, and knowledge sharing.",
"Second, we conduct a case study for Cherokee (a severely endangered Native American language).",
"After reviewing Cherokee history and linguistics, we propose two methods of enriching Cherokee resources and discuss the developments of several NLP models that people from the Cherokee community are interested in.",
"We hope our work can encourage future work to think and plan the path forward for other endangered languages.",
"In the future, we hope to broaden our collaboration to even more Cherokee community members and build meaningful relationships with tribal governments, so that we can develop more useful applications through NLP techniques for supporting Cherokee revitalization.",
"The content of this paper is based on and inspired by our practice in Cherokee Language Revitalization.",
"The conclusions and suggestions may or may not generalize to other endangered languages.",
"For example, since Cherokee has its own syllabary and can be written down, we are interested in speech recognition for audio transcription.",
"Even though some methods can directly translate audio to text of another language, we do not want to skip the transcription step.",
"However, communities of some oral languages may want to prioritize translation over transcription to tackle the transcription bottleneck (Bird, 2020b).",
"On the other hand, our position is influenced by Crystal (2014), who thinks using electronic technology is important for language revitalization.",
"Therefore, a lot of our proposals, like building an online community, may have an assumption that computers and the Internet have been or can be widely accepted and used in the indigenous community.",
"However, it may not be true in every indigenous community.",
"We thank the reviewers for their helpful comments.",
"We thank Archiki Prasad and Zhiyuan Tang for providing guidance on developing ASR models.",
"We thank Michael Conrad for providing Cherokee audios and transcriptions.",
"We thank David Montgomery and Eva Marie Garroutte for providing their statements.",
"We thank the Kituwah Preservation and Education Program (KPEP), the Eastern Band of Cherokee Indians, and the Cherokee Nation.",
"This work was supported by NSF-CAREER Award 1846185, ONR Grant N00014-18-1-2871, NSF-AI Engage Institute DRL-2112635, and a Bloomberg Data Science Ph.D. Fellowship.",
"The views contained in this article are those of the authors and not of the funding agency."
] | [
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"method",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"We introduce Dynabench, an open-source platform for dynamic dataset creation and model benchmarking.",
"Dynabench runs in a web browser and supports human-and-model-in-the-loop dataset creation: annotators seek to create examples that a target model will misclassify, but that another person will not.",
"In this paper, we argue that Dynabench addresses a critical need in our community: contemporary models quickly achieve outstanding performance on benchmark tasks but nonetheless fail on simple challenge examples and falter in real-world scenarios.",
"With Dynabench, dataset creation, model development, and model assessment can directly inform each other, leading to more robust and informative benchmarks.",
"We report on four initial NLP tasks, illustrating these concepts and highlighting the promise of the platform, and address potential objections to dynamic benchmarking as a new standard for the field.",
"While it used to take decades for machine learning models to surpass estimates of human performance on benchmark tasks, that milestone is now routinely reached within just a few years for newer datasets (see Figure 1).",
"As with the rest of AI, NLP has advanced rapidly thanks to improvements in computational power, as well as algorithmic breakthroughs, ranging from attention mechanisms (Bahdanau et al., 2014; Luong et al., 2015), to Transformers (Vaswani et al., 2017), to pre-trained language models (Howard and Ruder, 2018; Devlin et al., 2019; Liu et al., 2019b; Radford et al., 2019; Brown et al., 2020).",
"Equally important has been the rise of benchmarks that support the development of ambitious new data-driven models and that encourage apples-to-apples model comparisons.",
"Benchmarks provide a north star goal for researchers, and are part of the reason we can confidently say we have made great strides in our field.",
"Figure 1: Benchmark saturation over time for popular benchmarks (MNIST, GLUE, ImageNet, SQuAD 1.1, SQuAD 2.0, Switchboard), normalized with initial performance at minus one and human performance at zero.",
"In light of these developments, one might be forgiven for thinking that NLP has created models with human-like language capabilities.",
"Practitioners know that, despite our progress, we are actually far from this goal.",
"Models that achieve super-human performance on benchmark tasks (according to the narrow criteria used to define human performance) nonetheless fail on simple challenge examples and falter in real-world scenarios.",
"A substantial part of the problem is that our benchmark tasks are not adequate proxies for the sophisticated and wide-ranging capabilities we are targeting: they contain inadvertent and unwanted statistical and social biases that make them artificially easy and misaligned with our true goals.",
"We believe the time is ripe to radically rethink benchmarking.",
"In this paper, which both takes a position and seeks to offer a partial solution, we introduce Dynabench, an open-source, web-based research platform for dynamic data collection and model benchmarking.",
"The guiding hypothesis behind Dynabench is that we can make even faster progress if we evaluate models and collect data dynamically, with humans and models in the loop, rather than the traditional static way.",
"Concretely, Dynabench hosts tasks for which we dynamically collect data against state-of-the-art models in the loop, over multiple rounds.",
"The stronger the models are and the fewer weaknesses they have, the lower their error rate will be when interacting with humans, giving us a concrete metric: how well do AI systems perform when interacting with humans?",
"This reveals the shortcomings of state-of-the-art models, and it yields valuable training and assessment data which the community can use to develop even stronger models.",
"In this paper, we first document the background that led us to propose this platform.",
"We then describe the platform in technical detail, report on findings for four initial tasks, and address possible objections.",
"We finish with a discussion of future plans and next steps.",
"Progress in NLP has traditionally been measured through a selection of task-level datasets that gradually became accepted benchmarks (Marcus et al., 1993; Pradhan et al., 2012).",
"Recent well-known examples include the Stanford Sentiment Tree-bank (Socher et al., 2013), SQuAD (Rajpurkar et al., 2016, 2018), SNLI (Bowman et al., 2015), and MultiNLI (Williams et al., 2018).",
"More recently, multi-task benchmarks such as SentE-val (Conneau and Kiela, 2018), DecaNLP (McCann et al., 2018), GLUE (Wang et al., 2018), and Super-GLUE (Wang et al., 2019) were proposed with the aim of measuring general progress across several tasks.",
"When the GLUE dataset was introduced, solving GLUE was deemed beyond the capability of current transfer learning methods (Wang et al., 2018).",
"However, GLUE saturated within a year and its successor, SuperGLUE, already has models rather than humans at the top of its leaderboard.",
"These are remarkable achievements, but there is an extensive body of evidence indicating that these models do not in fact have the human-level natural language capabilities one might be lead to believe.",
"Whether our models have learned to solve tasks in robust and generalizable ways has been a topic",
"of much recent interest.",
"Challenging test sets have shown that many state-of-the-art NLP models struggle with compositionality (Nie et al., 2019; Kim and Linzen, 2020; Yu and Ettinger, 2020; White et al., 2020), and find it difficult to pass the myriad stress tests for social (Rudinger et al., 2018; May et al., 2019; Nangia et al., 2020) and/or linguistic competencies (Geiger et al., 2018; Naik et al., 2018; Glockner et al., 2018; White et al., 2018; Warstadt et al., 2019; Gauthier et al., 2020; Hossain et al., 2020; Jeretic et al., 2020; Lewis et al., 2020; Saha et al., 2020; Schuster et al., 2020; Sugawara et al., 2020; Warstadt et al., 2020).",
"Yet, challenge sets may suffer from performance instability (Liu et al., 2019a; Rozen et al., 2019; Zhou et al., 2020) and often lack sufficient statistical power (Card et al., 2020), suggesting that, although they may be valuable assessment tools, they are not sufficient for ensuring that our models have achieved the learning targets we set for them.",
"Models are susceptible to adversarial attacks, and despite impressive task-level performance, state-of-the-art systems still struggle to learn robust representations of linguistic knowledge (Ettinger et al., 2017), as also shown by work analyzing model diagnostics (Ettinger, 2020; Ribeiro et al., 2020).",
"For example, question answering models can be fooled by simply adding a relevant sentence to the passage (Jia and Liang, 2017).",
"Text classification models have been shown to be sensitive to single input character change (Ebrahimi et al., 2018b) and first-order logic inconsistencies (Minervini and Riedel, 2018).",
"Similarly, machine translation systems have been found susceptible to character-level perturbations (Ebrahimi et al., 2018a) and synthetic and natural noise (Belinkov and Bisk, 2018; Khayrallah and Koehn, 2018).",
"Natural language inference models can be fooled by simple syntactic heuristics or hypothesis-only biases (Gururangan et al., 2018; Poliak et al., 2018; Tsuchiya, 2018; Belinkov et al., 2019; McCoy et al., 2019).",
"Dialogue models may ignore perturbations of dialogue history (Sankar et al., 2019).",
"More generally, Wallace et al. (2019) find universal adversarial perturbations forcing targeted model errors across a range of tasks.",
"Recent work has also focused on evaluating model diagnostics through counterfactual augmentation (Kaushik et al., 2020), decision boundary analysis (Gardner et al., 2020; Swayamdipta et al., 2020), and behavioural testing (Ribeiro et al., 2020).",
"Research progress has traditionally been driven by a cyclical process of resource collection and architectural improvements.",
"Similar to Dynabench, recent work seeks to embrace this phenomenon, addressing many of the previously mentioned issues through an iterative human-and-model-in-the-loop annotation process (Yang et al., 2017; Dinan et al., 2019; Chen et al., 2019; Bartolo et al., 2020; Nie et al., 2020), to find unknown unknowns (Attenberg et al., 2015) or in a never-ending or life-long learning setting (Silver et al., 2013; Mitchell et al., 2018).",
"The Adversarial NLI (ANLI) dataset (Nie et al., 2020), for example, was collected in an adversarial setting over multiple rounds to yield a 'moving post' dynamic target for NLU systems, rather than a static benchmark that will eventually saturate.",
"In its few-shot learning mode, GPT-3 barely shows signs of life (Brown et al., 2020) (i.e., it is barely above random) on ANLI, which is evidence that we are still far away from human performance on that task.",
"While crowdsourcing has been a boon for large-scale NLP dataset creation (Snow et al., 2008; Munro et al., 2010), we ultimately want NLP systems to handle natural data (Kwiatkowski et al., 2019) and be ecologically valid (de Vries et al., 2020).",
"Ethayarajh and Jurafsky (2020) analyze the distinction between what leaderboards incentivize and what is useful in practice through the lens of microeconomics.",
"A natural setting for exploring these ideas might be dialogue (Hancock et al., 2019; Shuster et al., 2020).",
"Other works have pointed out misalignments between maximum-likelihood training on i.i.d. train/test splits and human language (Linzen, 2020; Stiennon et al., 2020).",
"We think there is widespread agreement that something has to change about our standard evaluation paradigm and that we need to explore alternatives.",
"The persistent misalignment between benchmark performance and performance on challenge and adversarial test sets reveals that standard evaluation paradigms overstate the ability of our models to perform the tasks we have set for them.",
"Dynabench offers one path forward from here, by allowing researchers to combine model development with the stress-testing that needs to be done to achieve true robustness and generalization.",
"Dynabench is a platform that encompasses different tasks .",
"Data for each task is collected over multiple rounds , each starting from the current state of the art.",
"In every round, we have one or more target models in the loop.",
"These models interact with humans, be they expert linguists or crowdworkers, who are in a position to identify models' shortcomings by providing examples for an optional context.",
"Examples that models get wrong, or struggle with, can be validated by other humans to ensure their correctness.",
"The data collected through this process can be used to evaluate state-of-the-art models, and to train even stronger ones, hopefully creating a virtuous cycle that helps drive progress in the field.",
"Figure 2 provides a sense of what the example creation interface looks like.",
"As a large-scale collaborative effort, the platform is meant to be a platform technology for human-and-model-in-the-loop evaluation that belongs to the entire community.",
"In the current iteration, the platform is set up for dynamic adversarial data collection, where humans can attempt to find model-fooling examples.",
"This design choice is due to the fact that the average case, as measured by maximum likelihood training on i.i.d. datasets, is much less interesting than the worst (i.e., adversarial) case, which is what we want our systems to be able to handle if they are put in critical systems where they interact with humans in real-world settings.",
"However, Dynabench is not limited to the adversarial setting, and one can imagine scenarios where humans are rewarded not for fooling a model or ensemble of models, but for finding examples that models, even if they are right, are very uncertain about, perhaps in an active learning setting.",
"Similarly, the paradigm is perfectly compatible with collaborative settings that utilize human feedback, or even negotiation.",
"The crucial aspect of this proposal is the fact that models and humans interact live in the loop for evaluation and data collection.",
"One of the aims of this platform is to put expert linguists center stage.",
"Creating model-fooling examples is not as easy as it used to be, and finding interesting examples is rapidly becoming a less trivial task.",
"In ANLI, the verified model error rate for crowd workers in the later rounds went below 1-in-10 (Nie et al., 2020), while in Beat the AI, human performance decreased while time per valid adversarial example went up with stronger models in the loop (Bartolo et al., 2020).",
"Figure 2: The Dynabench example creation interface for sentiment analysis, with an illustrative example.",
"For expert linguists, we expect the model error rate to be much higher, but if the platform actually lives up to its virtuous cycle promise, that error rate will go down quickly.",
"Thus, we predict that linguists with expertise in exploring the decision boundaries of machine learning models will become essential.",
"While we are primarily motivated by evaluating progress, both ANLI and Beat the AI show that models can overcome some of their existing blind spots through adversarial training.",
"They also find that best model performance is still quite far from that of humans, suggesting that while the collected data appears to lie closer to the model decision boundaries, there still exist adversarial examples beyond the remit of current model capabilities.",
"Dynabench offers low-latency, real-time feedback on the behavior of state-of-the-art NLP models.",
"The technology stack is based on PyTorch (Paszke et al., 2019), with models served via TorchServe (https://pytorch.org/serve).",
"The platform not only displays prediction probabilities but, through an inspect model functionality, also allows the user to examine token-level layer integrated gradients (Sundararajan et al., 2017), obtained via the Captum interpretability library (https://captum.ai/).",
"For each example, we allow the user to explain what the correct label is, as well as why they think it fooled the model if the model got it wrong, or why the model might have been fooled if it wasn't.",
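"As an illustration of the kind of token-level inspection described above (a sketch, not Dynabench's actual serving code), Captum's LayerIntegratedGradients can attribute a sentiment prediction to input tokens; the checkpoint name is an assumption.",
```python
# Sketch: token-level integrated gradients for a sentiment classifier,
# using Captum's LayerIntegratedGradients over the embedding layer.
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "textattack/roberta-base-SST-2"  # assumed sentiment checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def forward(input_ids):
    return model(input_ids).logits

inputs = tokenizer("I absolutely loved this movie!", return_tensors="pt")
lig = LayerIntegratedGradients(forward, model.roberta.embeddings)
attrs = lig.attribute(inputs.input_ids,
                      baselines=torch.zeros_like(inputs.input_ids),
                      target=1)  # attribution toward the positive class
scores = attrs.sum(dim=-1).squeeze(0)  # collapse embedding dimensions
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs.input_ids[0]),
                        scores):
    print(f"{token}\t{score:.3f}")
```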
"All collected model-fooling (or, depending on the task, even non-model-fooling) examples are verified by other humans to ensure their validity.",
"Task owners can collect examples through the web interface, by engaging with the community, or through Mephisto (https://github.com/facebookresearch/Mephisto), which makes it easy to connect, e.g., Mechanical Turk workers to the exact same backend.",
"All collected data will be open sourced, in an anonymized fashion.",
"In its current mode, Dynabench could be described as a fairly conservative departure from the status quo.",
"It is being used to develop datasets that support the same metrics that drive existing benchmarks.",
"The crucial change is that the datasets are now dynamically created, allowing for more kinds of evaluation, e.g., tracking progress through rounds and across different conditions.",
"We have selected four official tasks as a starting point, which we believe represent an appropriate cross-section of the field at this point in time.",
"Natural Language Inference (NLI) and Question Answering (QA) are canonical tasks in the field.",
"Sentiment analysis is a task that some consider solved (and is definitely treated as such, with all kinds of ethically problematic repercussions), which we show is not the case.",
"Hate speech is very important as it can inflict harm on people, yet classifying it remains challenging for NLP.",
"Natural language inference.",
"Built upon the semantic foundation of natural logic (Sánchez Valencia, 1991, i.a.) and hailing back much further (van Benthem, 2008), NLI is one of the quintessential natural language understanding tasks.",
"NLI, also known as 'recognizing textual entailment' (Dagan et al., 2006), is often formulated as a 3-way classification problem, where the input is a context sentence paired with a hypothesis, and the output is a label (entailment, contradiction, or neutral) indicating the relation between the pair.",
"We build on the ANLI dataset (Nie et al., 2020) and its three rounds to seed the Dynabench NLI task.",
"During the ANLI data collection process, the annotators were presented with a context (extracted from a pre-selected corpus) and a desired target label, and asked to provide a hypothesis that fools the target model adversary into misclassifying the example.",
"If the target model is fooled, the annotator was invited to speculate about why, or motivate why their example was right.",
"The target model of the first round (R1) was a single BERT-Large model fine-tuned on SNLI and MNLI, while the target model of the second and third rounds (R2, R3) was an ensemble of RoBERTa-Large models fine-tuned on SNLI, MNLI, FEVER (Thorne et al., 2018) recast as NLI, and all of the ANLI data collected prior to the corresponding round.",
"The contexts for Round 1 and Round 2 were Wikipedia passages curated in Yang et al. (2018) and the contexts for Round 3 were from various domains.",
"Results indicate that state-of-the-art models (which can obtain 90%+ accuracy on SNLI and MNLI) cannot exceed 50% accuracy on rounds 2 and 3.",
"With the launch of Dynabench, we have started collection of a fourth round, which has several innovations: not only do we select candidate contexts from a more diverse set of Wikipedia featured articles but we also use an ensemble of two different models with different architectures as target adversaries to increase diversity and robustness.",
"Moreover, the ensemble of adversaries will help mitigate issues with creating a dataset whose distribution is too closely aligned to a particular target model or architecture.",
"Additionally, we are collecting two types of natural language explanations: why an example is correct and why a target model might be wrong.",
"We hope that disentangling this information will yield an additional layer of interpretability and yield models that are at least as explainable as they are robust.",
"Question answering.",
"The QA task takes the same format as SQuAD1.1 (Rajpurkar et al., 2016), i.e., given a context and a question, extract an answer from the context as a continuous span of text.",
"The first round of adversarial QA (AQA) data comes from Beat the AI (Bartolo et al., 2020).",
"During annotation, crowd workers were presented with a context sourced from Wikipedia, identical to those in SQuAD1.1, and asked to write a question and select an answer.",
"The annotated answer was compared to the model prediction using a word-overlap F1 threshold and, if sufficiently different, considered to have fooled the model.",
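"For reference, a SQuAD-style word-overlap F1 (a sketch; Beat the AI's exact normalization may differ) can be computed as follows.",
```python
# Sketch: word-overlap F1 between an annotated answer and a model prediction;
# an example counts as model-fooling when F1 falls below some threshold.
from collections import Counter

def word_overlap_f1(gold, pred):
    gold_tokens, pred_tokens = gold.lower().split(), pred.lower().split()
    common = Counter(gold_tokens) & Counter(pred_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_tokens), overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(word_overlap_f1("the Trail of Tears", "Trail of Tears"))  # ~0.857
```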
"The target models in round 1 were BiDAF (Seo et al., 2017), BERT-Large, and RoBERTa-Large.",
"The model in the loop for the current round is RoBERTa trained on the examples from the first round combined with SQuAD1.1.",
"Despite the super-human performance achieved on SQuAD1.1, machine performance is still far from humans on the current leaderboard.",
"In the current phase, we seek to collect rich and diverse examples, focusing on improving model robustness through generative data augmentation, to provide more challenging model adversaries in this constrained task setting.",
"We should emphasize that we don't consider this task structure representative of the broader defi-nition even of closed-domain QA, and are looking to expand this to include unanswerable questions (Rajpurkar et al., 2018), longer and more complex passages, Yes/No questions and multi-span answers (Kwiatkowski et al., 2019), and numbers, dates and spans from the question (Dua et al., 2019) as model performance progresses.",
"Sentiment analysis.",
"The sentiment analysis project is a multi-pronged effort to create a dynamic benchmark for sentiment analysis and to evaluate some of the core hypotheses behind Dynabench.",
"Potts et al. (2020) provide an initial report and the first two rounds of this dataset.",
"The task is structured as a 3-way classification problem: positive, negative, and neutral.",
"The motivation for using a simple positive/negative dichotomy is to show that there are still very challenging phenomena in this traditional sentiment space.",
"The neutral category was added to avoid (and helped trained models avoid) the false presupposition that every text conveys sentiment information (Pang and Lee, 2008).",
"In future iterations, we plan to consider additional dimensions of sentiment and emotional expression (Alm et al., 2005; Neviarouskaya et al., 2010; Wiebe et al., 2005; Liu et al., 2003; Sudhof et al., 2014).",
"In this first phase, we examined the question of how best to elicit examples from workers that are diverse, creative, and naturalistic.",
"In the prompt condition, we provide workers with an actual sentence from an existing product or service review and ask them to edit it so that it fools the model.",
"In the no prompt condition, workers try to write original sentences that fool the model.",
"We find that the prompt condition is superior: workers generally make substantial edits, and the resulting sentences are more linguistically diverse than those in the no prompt condition.",
"In a parallel effort, we also collected and validated hard sentiment examples from existing corpora, which will enable another set of comparisons that will help us to refine the Dynabench protocols and interfaces.",
"We plan for the dataset to continue to grow, probably mixing attested examples with those created on Dynabench with the help of prompts.",
"With these diverse rounds, we can address a wide range of question pertaining to dataset artifacts, domain transfer, and overall robustness of sentiment analysis systems.",
"Hate speech detection.",
"The hate speech task classifies whether a statement expresses hate against a protected characteristic or not.",
"Detecting hate is notoriously difficult given the important role played by context and speaker (Leader Maynard and Benesch, 2016) and the variety of ways in which hate can be expressed (Waseem et al., 2017).",
"Few high-quality, varied and large training datasets are available for training hate detection systems (Vidgen and Derczynski, 2020; Poletto et al., 2020; Vidgen et al., 2019).",
"We organised four rounds of data collection and model training, with preliminary results reported in Vidgen et al. (2020).",
"In each round, annotators are tasked with entering content that tricks the model into giving an incorrect classification.",
"The content is created by the annotators and as such is synthetic in nature.",
"At the end of each round the model is retrained and the process is repeated.",
"For the first round, we trained a RoBERTa model on 470,000 hateful and abusive statements 4 .",
"For subsequent rounds the model was trained on the original data plus content from the prior rounds.",
"Due to the complexity of online hate, we hired and trained analysts rather than paying for crowd-sourced annotations.",
"Each analyst was given training, support, and feedback throughout their work.",
"In all rounds annotators provided a label for whether content is hateful or not.",
"In rounds 2, 3 and 4, they also gave labels for the target (i.e., which group has been attacked) and type of statement (e.g., derogatory remarks, dehumanization, or threatening language).",
"These granular labels help to investigate model errors and improve performance, as well as directing the identification of new data for future entry.",
"For approximately half of entries in rounds 2, 3 and 4, annotators created perturbations where the text is minimally adjusted so as to flip the label (Gardner et al., 2020; Kaushik et al., 2020).",
"This helps to identify decision boundaries within the model, and minimizes the risk of overfitting given the small pool of annotators.",
"Over the four rounds, content becomes increasingly adversarial (shown by the fact that target models have lower performance on later rounds' data) and models improve (shown by the fact that the model error rate declines and the later rounds' models have the highest accuracy on each round).",
"We externally validate performance using the HATECHECK suite of diagnostic tests from Rttger et al. (2020).",
"We show substantial improvement over the four rounds, and our final round target model achieves 94% on HATECHECK , outperforming the models presented by the original authors.",
"Table 1 shows an overview of the current situation for the four tasks.",
"Some tasks are further along in their data collection efforts than others.",
"As we can see, the validated model error rate (vMER; the number of human-validated model errors divided by the total number of examplesnote that the error rates are not necessarily comparable across tasks, since the interfaces and in-the-loop models are not identical) is still very high across all tasks, clearly demonstrating that NLP is far from solved.",
"There are several obvious and valid objections one can raise.",
"We do not have all the answers, but we can try to address some common concerns.",
"distributional shift?",
"Yes, that is a real risk.",
"First, we acknowledge that crowdsourced texts are likely to have unnatural qualities: the setting itself is artificial from the perspective of genuine communication, and crowdworkers are not representative of the general population.",
"Dynabench could exacerbate this, but it also has features that can help alleviate it.",
"For instance, as we discussed earlier, the sentiment analysis project is using naturalistic prompt sentences to try to help workers create more diverse and naturalistic data.",
"Second, if we rely solely on dynamic adversarial collection, then we increase the risks of creating unnatural datasets.",
"For instance, Bartolo et al. (2020) show that training solely on adversarially-collected data for QA was detrimental to performance on non-adversarially collected data.",
"However, they also show that models are capable of simultaneously learning both distributions when trained on the combined data, retaining if not slightly improving performance on the original distribution (of course, this may not hold if we have many more examples of one particular kind).",
"Ideally, we would combine adversarially collected data with non-adversarialpreferably naturally collected data, so as to capture both the average and worst case scenarios in our evaluation.",
"Finally, we note that Dynabench could enable the community to explore the kinds of distributional shift that are characteristic of natural languages.",
"Words and phrases change their meanings over time, between different domains, and even between different interlocutors.",
"Dynabench could be a tool for studying such shifts and finding models that can succeed on such phenomena.",
"What if annotators overfit on models?",
"A potential risk is cyclical progress, where improved models forget things that were relevant in earlier rounds because annotators focus too much on a particular weakness.",
"Continual learning is an exciting research direction here: we should try to understand distributional shift better, as well as how to characterize how data shifts over time might impact learning, and how any adverse effects might be overcome.",
"Because of how most of us have been trained, it is natural to assume that the last round is automatically the best evaluation round, but that does not mean that it should be the only round: in fact, most likely, the best way to evaluate progress is to evaluate on all rounds as well as any high-quality static test set that exists, possibly with a recency-based discount factor.",
"To make an analogy with software testing, similar to checklists (Ribeiro et al., 2020), it would be a bad idea to throw away old tests just because you've written some new ones.",
"As long as we factor in previous rounds, Dynabench's dynamic nature offers a way out from forgetting and cyclical issues: any model biases will be fixed in the limit by annotators exploiting vulnerabilities.",
"Another risk is that the data distribution might be too heavily dependent on the target model in the loop.",
"When this becomes an issue, it can be mitigated by using ensembles of many different architectures in the loop, for example the top current state-of-the-art ones, with multiple seeds.",
"5 How do we account for future, not-yet-in-the-loop models?",
"Obviously, we can'tso this is a very valid criticism.",
"However, we can assume that an ensemble of model architectures is a reasonable approximation, if and only if the models are not too bad at their task.",
"This latter point is crucial: we 5 ANLI does not show dramatically different results across models, suggesting that this is not necessarily a big problem yet, but it shows in R2 and R3 that ensembles are possible.",
"take the stance that models by now, especially in aggregate, are probably good enough to be reasonably close enough to the decision boundariesbut it is definitely true that we have no guarantees that this is the case.",
"How do we compare results if the benchmark keeps changing?",
"This is probably the main hurdle from a community adoption standpoint.",
"But if we consider, e.g., the multiple iterations of Se-mEval or WMT datasets over the years, we've already been handling this quite wellwe accept that a model's BLEU score on WMT16 is not comparable to WMT14.",
"That is, it is perfectly natural for benchmark datasets to evolve as the community makes progress.",
"The only thing Dynabench does differently is that it anticipates dataset saturation and embraces the loop so that we can make faster and more sustained progress.",
"What about generative tasks?",
"For now Dynabench focuses on classification or span extraction tasks where it is relatively straightforward to establish whether a model was wrong.",
"If instead the evaluation metric is something like ROUGE or BLEU and we are interested in generation, we need a way to discretize an answer to determine correctness, since we wouldn't have ground truth annotations; which makes determining whether a model was successfully fooled less straightforward.",
"However, we could discretize generation by re-framing it as multiple choice with hard negatives, or simply by asking the annotator if the generation is good enough.",
"In short, going beyond classification will require further research, but is definitely doable.",
"Do we need models in the loop for good data?",
"The potential usefulness of adversarial examples can be explained at least in part by the fact that having an annotation partner (so far, a model) simply provides better incentives for generating quality annotation.",
"Having the model in the loop is obviously useful for evaluation, but it's less clear if the resultant data is necessarily also useful in general for training.",
"So far, there is evidence that adversarially collected data provides performance gains irrespective of the model in the loop (Nie et al., 2020; Dinan et al., 2019; Bartolo et al., 2020).",
"For example, ANLI shows that replacing equal amounts of normally collected SNLI and MNLI training data with ANLI data improves model performance, especially when training size is small (Nie et al., 2020), suggesting higher data efficiency.",
"However, it has also been found that model-in-the-loop counterfactually-augmented training data does not necessarily lead to better generalization (Huang et al., 2020).",
"Given the distributional shift induced by adversarial settings, it would probably be wisest to combine adversarially collected data with non-adversarial data during training (ANLI takes this approach), and to also test models in both scenarios.",
"To get the most useful training and testing data, it seems the focus should be on collecting adversarial data with the best available model(s), preferably with a wide range of expertise, as that will likely be beneficial to future models also.",
"That said, we expect this to be both task and model dependent.",
"Much more research is required, and we encourage the community to explore these topics.",
"Is it expensive?",
"Dynamic benchmarking is indeed expensive, but it is worth putting the numbers in context, as all data collection efforts are expensive when done at the scale of our current benchmark tasks.",
"For instance, SNLI has 20K examples that were separately validated, and each one of these examples cost approximately $0.50 to obtain and validate (personal communication with SNLI authors).",
"Similarly, the 40K validated examples in MultiNLI cost $0.64 each (p.c., MultiNLI authors).",
"By comparison, the average cost of creation and validation for ANLI examples is closer to $1.00 (p.c., ANLI authors).",
"This is a substantial increase at scale.",
"However, dynamic adversarial datasets may also last longer as benchmarks.",
"If true, then the increased costs could turn out to be a bargain.",
"We should acknowledge, though, that dynamic benchmarks will tend to be more expensive than regular benchmarks for comparable tasks, because not every annotation attempt will be model-fooling and validation is required.",
"Such expenses are likely to increase through successive rounds, as the models become more robust to workers' adversarial attacks.",
"The research bet is that each example obtained this way is actually worth more to the community and thus worth the expense.",
"In addition, we hope that language enthusiasts and other non-crowdworker model breakers will appreciate the honor that comes with being high up on the user leaderboard for breaking models.",
"We are working on making the tool useful for educa-tion, as well as gamifying the interface to make it (even) more fun to try to fool models, as a game with a purpose (Von Ahn and Dabbish, 2008), for example through the ability to earn badges.",
"We introduced Dynabench, a research platform for dynamic benchmarking.",
"Dynabench opens up exciting new research directions, such as investigating the effects of ensembles in the loop, distributional shift characterisation, exploring annotator efficiency, investigating the effects of annotator expertise, and improving model robustness to targeted adversarial attacks in an interactive setting.",
"It also facilitates further study in dynamic data collection, and more general cross-task analyses of human-and-machine interaction.",
"The current iteration of the platform is only just the beginning of a longer journey.",
"In the immediate future, we aim to achieve the following goals: Anyone can run a task.",
"Having created a tool that allows for human-in-the-loop model evaluation and data collection, we aim to make it possible for anyone to run their own task.",
"To get started, only three things are needed: a target model, a (set of) context(s), and a pool of annotators.",
"Dynabench is text-only and focuses on English, but we hope to change that soon.",
"Live model evaluation.",
"Model evaluation should not be about one single number on some test set.",
"If models are uploaded through a standard interface, they can be scored automatically along many dimensions.",
"We would be able to capture not only accuracy, for example, but also usage of computational resources, inference time, fairness, and many other relevant dimensions.",
"This will in turn enable dynamic leaderboards, for example based on utility (Ethayarajh and Jurafsky, 2020).",
"This would also allow for backward-compatible comparisons, not having to worry about the benchmark changing, and automatically putting new state of the art models in the loop, addressing some of the main objections.",
"One can easily imagine a future where, in order to fulfill reproducibility requirements, authors do not only link to their open source codebase but also to their model inference point so others can talk with their model.",
"This will help drive progress, as it will allow others to examine models' capabilities and identify failures to address with newer even better models.",
"If we cannot always democratize the training of state-of-the-art AI models, at the very least we can democratize their evaluation .",
"We would like to thank Jason Weston, Emily Dinan and Kyunghyun Cho for their input on this project, and Sonia Kris for her support.",
"ZW has been supported in part by the Canada 150 Research Chair program and the UK-Canada AI Artificial Intelligence Initiative.",
"YN and MB have been supported in part by DARPA MCS N66001-19-2-4031, DARPA YFA17-D17AP00022, and ONR N00014-18-1-2871.",
"CP has been supported in part by grants from Facebook, Google, and by Stanford's Institute for Human-Centered AI."
] | [
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Charts are commonly used for exploring data and communicating insights.",
"Generating natural language summaries from charts can be very helpful for people in inferring key insights that would otherwise require a lot of cognitive and perceptual efforts.",
"We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types.",
"We explain the dataset construction process and analyze the datasets.",
"We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images.",
"Our analysis with automatic and human evaluation shows that while our best models usually generate fluent summaries and yield reasonable BLEU scores, they also suffer from hallucinations and factual errors as well as difficulties in correctly explaining complex patterns and trends in charts.",
"Data visualizations such as bar charts, line charts, and pie charts are very popular for presenting quantitative data.",
"Often people use such charts to get important insights from data and make informed decisions.",
"However, it is well-known that inferring key insights from the charts can be quite challenging and time-consuming, as it may require a lot of cognitive and perceptual efforts (Prez-Echeverra et al., 2018; Whitaker and Jacobbe, 2017).",
"Automatic chart summarization is a task where the goal is to explain a chart and summarize key takeaways from it in natural language.",
"Chart summarization has several key benefits and potential applications.",
"First, chart summaries can help people identify key insights from charts that they might Equal contribution.",
"Listing order is based on the alphabetical ordering of author surnames.",
"Gold: In 2019, Singapore imported approximately 236.8 billion Singapore dollars worth of machinery and equipment, making it the country's largest import commodity by value.",
"This was followed by the import of mineral fuels and lubricants, valued at 102.7 billion Singapore dollars.",
"TAB-T5: Machinery and equipment was the most valuable commodity for Singapore in 2019, with an import value of 236.8 billion Singapore dollars.",
"Mineral fuels and lubricants were the second most valuable commodity for Singapore, with an import value of 102.7 billion Singapore dollars.",
"have missed otherwise.",
"In a study on a chart corpus, Carberry et al. (2006) found that chart authors often failed to convey key insights from charts in their corresponding textual captions.",
"Thus, automatic summarization could help authors write effective reports and articles on data facts by suggesting explanatory texts.",
"Similarly, readers could benefit from such summaries, as studies have found that captions help readers find important points by explaining visually prominent features in charts (Kim et al., 2021).",
"Chart summarization offers another important benefit of making charts more accessible to people who are visually impaired since they can use screen readers to understand what is being presented in the chart (Ferres et al., 2013).",
"Finally, the generated summaries can be leveraged for indexing documents containing charts to improve information retrieval algorithms (Li et al., 2013).",
"Despite its numerous benefits and applications, the chart summarization problem has not received much attention in the NLP community.",
"Early approaches relied on template-based text generation methods that combine statistical techniques and planning-based architecture (Reiter, 2007) to generate captions from bar and line charts (Fasciano and Lapalme, 1996; Mittal et al., 1998; Green et al., 4005 2004; Demir et al., 2012).",
"Recently, researchers considered data-driven neural models for describing tabular data (Mei et al., 2016; Gong et al., 2019).",
"However, compared to tables, charts serve a different communication goal, and so is the chart-to-text problem.",
"Unlike tables which simply list raw data, charts create visual representation of data that can draw a reader's attention to various prominent features such as trends and outliers (Kim et al., 2021).",
"For example, a line chart may depict an important trend whereas a scatterplot may visually communicate correlations and outliers.",
"Existing table-to-text approaches are not designed to explain such visually salient chart features in summaries.",
"There are two main impediments to addressing the chart summarization task.",
"First, the lack of large-scale datasets makes it difficult to solve the task using data-driven neural models.",
"Second, there are no strong baselines that utilize the latest advances in neural text generation tasks.",
"Obeid and Hoque (2020) made an initial attempt to address this problem with a dataset and a model that utilizes a Transformer (Vaswani et al., 2017) architecture.",
"However, their dataset was built by collecting a small set of charts (8,305) from a single source covering only two types of charts (bar and line).",
"Also, their approach does not exploit the recent advances in large-scale language model pretraining, which has been shown to be very beneficial for many vision and language tasks (Devlin et al., 2019; Touvron et al., 2021).",
"To our knowledge, there is no large-scale benchmark with a wider range of topics from multiple sources, covering many different chart types, and with models that employ large-scale pretraining.",
"In this work, we present a large-scale benchmark for chart-to-text with two datasets consisting of 44,096 charts covering a broad range of topics and a variety of chart types.",
"We introduce two variations of the problem.",
"The first variation assumes that the underlying data table of a chart is available, while the other introduces a more challenging and realistic scenario by assuming that the chart is in image format and the underlying table is not available.",
"These two problem scenarios motivated us to adapt a variety of state-of-the-art models that combine computer vision and natural language generation techniques as strong baselines; see Fig. 1 for a sample model output.",
"Our primary contributions are: ( i ) a new large-scale benchmark covering a wide range of topics and chart types; ( ii ) a set of state-of-the-art neural models which can act as a starting point for other researchers to expand and improve upon; and ( iii ) a series of automatic and human evaluations as well as in-depth qualitative analysis to identify further challenges.",
"Our code and benchmark datasets are publicly available at https://github.com/vis-nlp/Chart-to-text.",
"Chart Summarization Early work (Mittal et al., 1998; Ferres et al., 2013) followed a planning-based architecture (Reiter, 2007) and used templates to generate texts.",
"These systems only describe how to read the chart rather than explain key insights conveyed by the chart.",
"Recently, commercial systems such as Quill and Wordsmith 1 as well as research prototypes, e.g., (Cui et al., 2019) and (Srinivasan et al., 2018) computed statistics ( e.g., extrema, outliers) to present facts from a dataset.",
"Demir et al. (2012) also compute statistics to generates bar chart summaries in a bottomup manner to simultaneously construct the discourse and sentence structures.",
"Recently, Chen et al. (2019) used the ResNet (He et al., 2016) to encode the chart image and an LSTM decoder to create the caption.",
"A key limitation of the above bodies of work is that sentences are generated using predefined templates, which may lack generality and offer little variation in terms of reported insights, grammatical styles and lexical choices compared to data-driven models.",
"Moving beyond template-based summaries, Obeid and Hoque (2020) adapted a transformer-based model on a dataset of 8,305 charts, while Spreafico and Carenini (2020) applied an LSTM based encoder-decoder model on a dataset of 306 chart summaries.",
"Both studies used much smaller datasets and did not consider the computer vision aspects of the problem.",
"Hsu et al. (2021) recently use a CNN+LSTM based image captioning model for scientific figure captioning.",
"In contrast, we focus on the generic chart-to-text problem and train several neural models that combine computer vision and data2text generation.",
"Data2text Generation Data2text models generate a descriptive summary for a table of records.",
"They have been used for various domain-specific tasks such as summarizing sports data (Barzilay and Lapata, 2005; Wiseman et al., 2017), weather-1 Narrative Science Quill; Automated Insights Wordsmith 4006 forecast data (Reiter et al., 2005), recipe generation (Yang et al., 2017) and biography generation (Lebret et al., 2016) as well as open-domain tasks (Parikh et al., 2020; Chen et al., 2020a).",
"Recent methods have primarily used an LSTM-based encoder-decoder architecture (Mei et al., 2016; Le-bret et al., 2016; Wiseman et al., 2017).",
"Gong et al. (2019) found that transformers (Vaswani et al., 2017) yielded more fluent and coherent outputs compared to their LSTM counterparts.",
"Others focused on controlling the structure of the summary using a planning approach (Su et al., 2021) as well as generating facts by preforming logical inference over the given table (Chen et al., 2020a,b).",
"Image Captioning There has been swift progress in image captioning largely due to the availability of large-scale datasets (Agrawal et al., 2019; Chen et al., 2015).",
"Zhang et al. (2021) developed an object detection model to summarize objects in images while Sidorov et al. (2020) utilized texts extracted from images using OCR to generate captions.",
"Unlike images with real-world objects and scenes, charts have marks ( e.g., bars, lines) that map quantitative data.",
"This makes the chart-to-text problem different from image captioning.",
"After searching through various sources including news sites, textbooks, and websites containing data facts, we found two suitable sources with sufficiently large numbers and varieties of charts with textual descriptions as we describe below.",
"Statista Statista (statista.com) is an online platform that regularly publishes charts on a wide range of topics including economics, market and opinion research.",
"We crawled 34,810 publicly accessible webpages in December 2020, yielding a total of 34,811 charts.",
"For each chart, we took a screenshot of the chart image, downloaded the data table, the title, axis labels and the human-written descriptions about the chart.",
"We classified the charts into two groups based on the number of columns in their underlying data tables: Data tables of simple charts have only two columns, whereas complex charts involve at least three columns ( e.g., stacked or group bar charts, line charts with multiple lines).",
"public opinion and demographic trends.",
"The articles are often accompanied by multiple charts along with high-quality descriptions written by professional editors.",
"We scraped 3,999 publicly accessible pages in January 2021, which gave a total of 9,285 charts.",
"Unlike Statista, the Pew reports do not provide the underlying data tables for most of the charts.",
"Among 9,285 charts, only 143 have underlying data tables.",
"For each chart, we downloaded the chart image, the surrounding paragraphs and the alternative text associated with the image (using the alt attribute), if it was available.",
"Like a title, the alt text often gives a very short chart description.",
"Finally, we classified the charts into simple and complex manually since underlying data tables were unavailable.",
"Below we describe two main steps of the data annotation process for each chart: ( i ) identify the relevant summary, and ( ii ) extract data.",
"Additional details of these steps are provided in Appendix A.1.",
"Statista We chose the first part of the text (from the chart icon to the next heading) as the chart summary.",
"This is based on the observation that the first part provides a succinct summary of the chart while the remaining parts often contain background information ( e.g., the history of a company).",
"Extracting data from the Statista charts was relatively straightforward as the underlying data tables were available.",
"However, most charts (32,660 out of 34,811) did not provide x-axis labels.",
"To assign representative labels for them, we first used regular expressions on the cell values of such a column to see if it represents common entities ( e.g., year , location ).",
"Still, there were 7,170 missing labels remaining.",
"We then applied the Wikidata knowledge base (Wik, 2021) to automatically derive an entity type label based on the data values plotted on x-axis.",
"However, sometimes the resulting labels were too generic ( e.g., human , business ).",
"Hence, we manually annotated each label by either accepting the entity type label, if it represents the x-axis accurately, or entering a more specific name.",
"Pew The annotation for Pew was more challenging as often a webpage contains many charts and paragraphs do not explicitly refer to their relevant chart.",
"Also, most charts did not have underlying data tables.",
"To address these challenges, we construct the dataset in three stages (Fig. 2).",
"( i )",
"Data extraction from chart images: We first extracted the text from the charts using CRAFT (Baek et al., 2019a,b), a state-of-the-art OCR model.",
"We then extracted the bounding boxes of the detected texts to extract geometric features ( e.g., normalized width and height of the text) and used them to train a gradient boosting classifier that categorizes the recognized text into one of the following categories: title, axis labels, legends, and data labels.",
"Since the visual style and structure vary among chart types, we trained a separate classifier for each chart type.",
"We manually labeled 319 examples (171 bar, 68 line, and 80 pie charts) and split them into train, validation, and test splits with 8:1:1 ratios, respectively.",
"Our models achieved a precision of 95.0% overall and 97.6% for title classification on our test set.",
"We then used our models to predict the text roles for the remaining charts in the Pew dataset.",
"We used the extracted title as the final chart title if there was no associated alt text with the chart image.",
"If the alt text was available, we took the longer one by comparing it with the extracted title.",
"( ii )",
"Identification of candidate paragraphs: We observed that relevant paragraphs tend to appear in close proximity to a given chart and share some content with the chart ( e.g., axis labels, data values).",
"We first used this proximity criteria to form a list of candidate paragraphs L c .",
"Specifically, for each chart, we selected the paragraph adjacent to the chart as well as the five paragraphs before and after it as candidates (maximum of 11 in total).",
"Next, we used a heuristic-based approach to automatically select a subset of relevant paragraphs L r L c .",
"We estimated the relevance score of each paragraph in L c to its corresponding chart as rel = content proximity , where content takes a weighted sum of the number of tokens matched between the paragraph and the OCR-extracted text (numerical tokens were given a higher weight than Statista Pew Type Simple Complex Simple Complex Bar 24,591 5,616 807 5,497 Line 2,646 902 325 2,129 Area 0 0 29 105 Scatter 0 0 0 68 Pie 409 0 325 0 Table 223 424 0 0 Total 27,869 6,942 1,486 7,799 Table 1: Chart type distribution. lexical tokens as they were better indicators of rel-evance), and proximity is based on the distance between the chart and the paragraph.",
"If rel exceeds a threshold and some minimum number of lexical and numerical tokens are matched between the paragraph and chart, we consider such a paragraph to be relevant to the chart.",
"We set this threshold empirically and chose it to be aggressively high to prioritize precision over recall.",
"We evaluated the efficacy of our approach against a randomly sampled set of 95 charts and 769 surrounding paragraphs and found a recall of 21.1% and a precision of 100%.",
"Given the perfect precision score, we considered the paragraphs in L r to be relevant and to confirm the relevance of the remaining paragraphs, we performed a human study.",
"( iii )",
"Selection of relevant paragraphs: We asked crowdworkers on Amazon Mechanical Turk to label how relevant each paragraph is to its chart.",
"A total of 5,478 charts and 13,237 paragraphs were annotated.",
"Each chart received two annotations from two workers.",
"If both workers labeled a paragraph as either completely irrelevant or relevant (partially/completely), we used the label that they agreed upon as the final label.",
"2 For the remaining 2,888 paragraphs where the workers disagreed, we resolved them through internal annotation.",
"Our chart-to-text datasets contain a diverse range of chart types (Table 1).",
"Bar charts make up the majority of the charts both in Statista (87.9%) and Pew (67.9%) for both simple as well as stacked and group bar charts.",
"The next most common type is line charts (10.2% in Statista and 26.4% in Pew).",
"To analyze the topic distribution, we extracted the topic of each chart using its webpage's metadata ( e.g., breadcrumbs, meta-tags).",
"Our datasets cover a broad range of topics including politics, society and health (see Fig. 9 in Appendix A.3).",
"The topics in Statista are more evenly distributed than the ones in Pew, which is dominated by U.S. Politics & Policy (45.4%).",
"Table 2 presents basic linguistic statistics about the datasets.",
"The summaries in Pew are about twice as long as the those in Statista, in terms of average character, token and sentence count.",
"Unsurprisingly, complex charts generally have longer summaries than their simple counterparts.",
"We further analyzed the semantic content of the summaries using 100 randomly sampled chart-summary pairs from each dataset.",
"Table 3 shows the distribution of sentences across the four main types of semantic content.",
"3 We notice that statistical and comparative information ( e.g., min, max, avg.) is the most common type of content in both datasets.",
"Summaries in Pew tend to report more insights that require more perceptual and cognitive efforts ( e.g., trends and causal relations) which are arguably more challenging to generate compared to simple statistics.",
"Both datasets contain comparable proportions of sentences covering contextual and domain-specific information.",
"Unlike Statista, Pew summaries rarely explain the chart types and encodings ( e.g., what do the xand yaxes represent).",
"Problem Definition We consider two variations of the chart-to-text problem.",
"In the first variation, we assume that the underlying data table of the chart is available, where the dataset can be represented as a set of 4-element tuples D = 3 Our categorization of content is inspired by a recent study (Lundgard and Satyanarayan, 2022).",
"{ C, T, M, S n } D n = 1 with C , T , M and S representing the chart image, data table, metadata and textual summary, respectively.",
"For each cell in the data table T , we have the following information: ( i ) the string value, ( ii ) the row and column positions, and ( iii ) whether it is a header cell or not.",
"The metadata M = ( C title , C type , C labels ) consists of the title, type ( e.g., bar, line) and axis labels.",
"In the second variation, we assume that the data table is not available which makes the problem more challenging as well as realistic because most charts online are in image format and do not have the underlying data tables.",
"For a given input X = C, T, M or C, M , our goal is to generate a textual description S which is a good summary of the chart according to a set of evaluation measures.",
"We consider three categories of models to tackle the task.",
"The first category is image captioning models, where the task is formulated as generating a textual description for the given chart image.",
"The second category is data-to-text models, which rely on the underlying data tables of the charts to produce the corresponding descriptions.",
"Finally, we consider a combination of vision and text models, where the models first extract the text using the CRAFT OCR model (Baek et al., 2019b) and then train with a data-to-text setup.",
"We present three categories of models below (hyperparameter settings for all the models are provided in Appendix A.3).",
"We develop over the Show, Attend, and Tell (SAT) model (Xu et al., 2015) to probe the effectiveness of this category of models for our task.",
"Following Xu et al. (2015), we use the ResNet50 (He et al., 2016) as the image encoder and a unidirectional LSTM (Hochreiter and Schmidhuber, 1997) as the decoder for text.",
"As the pretrained ResNet50 model is trained on object detection tasks on ImageNet (Deng et al., 2009), directly applying it to chart images gave poor results in our experiments.",
"Also, we do not have any object labels for the chart images to train the encoder.",
"Hence, we employ the recently proposed self-supervised strategy called Barlow Twins (Zbontar et al., 2021) which tries to make the embedding vectors of distorted versions of an image sample to be similar, while minimizing the redundancy between the components of these vectors.",
"It achieves state-of-the-art results for ImageNet classification with an accuracy gap of only 3.3% from the supervised model.",
"We pretrain a 4009 separate ResNet50 with Barlow Twins for each of our datasets and use it as an encoder in the model.",
"Chart2text (Obeid and Hoque, 2020) is an adapted transformer model for chart-to-text based on the data-to-text model of Gong et al. (2019).",
"It takes a sequence of data records as input with each record being a set of tuples ( e.g., column header, cell value, column index) and embeds them into feature vectors with positional encodings to distinguish orders (Fig. 3a).",
"The model includes an auxiliary training objective (binary labels indicating the presence of the record in the output sequence) on the encoder to maximize the content selection score.",
"It also implements a templating strategy of target text with data variables ( e.g., cells , axis labels ) to alleviate hallucination problems.",
"Since in Pew data tables are not available, we use OCR-generated texts as inputs which are linearized and embedded into feature vectors.",
"The bounding box information of OCR-generated data of each chart is also embedded and concatenated to the table vectors to provide positional information to the model.",
"Field-Infusing Model (Chen et al., 2020a) is inspired by the concept-to-text work (Lebret et al., 2016).",
"The values in a cell are first encoded with an LSTM, which is then concatenated with the embeddings of row index and column heading.",
"These table representations ( h 1 , h 2 in Fig. 3b) are then fed into a 3-layer Transformer encoder-decoder model to generate the target summaries.",
"Additionally, for Pew, we embed the bounding box information of the chart OCR-texts and concatenate it to the LSTM-based field representation as an auxiliary positional information to the model.",
"BART (Lewis et al., 2020) adopts a seq2seq Transformer architecture with denoising pretraining objectives.",
"It is particularly pretrained to be effective for text generation tasks.",
"For our chart-to-text tasks, we flatten the data table row by row and concatenate the title with table content as the input to the encoder (Fig. 3c).",
"In the absence of data tables, we concatenate all the OCR-texts in a top to bottom order and fed it to the model as input.",
"T5 (Raffel et al., 2020) is a unified seq2seq Transformer model that converts various NLP tasks into a text2text generation format.",
"It is first pretrained with a fill-in-the-blank' denoising objective, where 15% of the input tokens are randomly dropped out.",
"The spans of consecutive dropped-out tokens are replaced by a sentinel token.",
"The decoder then has to predict all of the dropped-out token spans, delimited by the same sentinel tokens used in the input.",
"This is different from the pretraining objective of BART where the decoder predicts the entire original sequence (not just the dropped spans).",
"T5 is fine-tuned with several supervised multi-task training objectives ( e.g., machine translation, text summarization).",
"We format the input in the same way as for the BART models.",
"Specifically, we add translate Chart to Text: \" to the prefix of the input to mimic the pretraining process (see Fig. 3c). For OCR-based input, we experiment with two T5 model variants. In the first variant, we concatenate all the OCR-extracted sentences from the chart image in a top to bottom order and fed it to the model as input. In the second, we modify the input to accommodate the spatial information of the detected texts. Inspired by Tan and Bansal (2019), we feed the bounding box coordinates of each detected text token into a linear layer to produce positional embeddings which are then added to their corresponding embeddings of the OCR tokens as input. 5 Evaluation 5.1 Automatic Evaluation Measures For automatic evaluation of the summary quality, we utilized five measures. BLEU (Post, 2018) and CIDEr (Vedantam et al., 2015) measure n-gram overlaps between the model generated text and the reference text. CIDEr computes TF-IDF weighted n-gram overlaps. BLEURT (Sel-lam et al., 2020) is a model-based evaluation metric that indicates to what extent the candidate is grammatical and conveys the meaning of the reference. We use BLEURT-base-128. Content Selection (CS) metric measures how well the generated summaries match the gold summaries in terms of selecting records to generate (Wiseman et al., 2017). Since both the BLEURT and CS are calculated at the sentence-level, we average these scores over the whole test set. Finally, for readability and fluency, we measure Perplexity (PPL) using a pre-trained GPT-2 Medium (Radford et al., 2019). Results In general, from the results in Table 4, we notice that large-scale unsupervised pretraining ( i.e., -BART\", -T5\") helps to boost the performance significantly.",
"In terms of the model variants, the image captioning model has failed to capture 4010 Transformer Encoder Prediction Layer Transformer Decoder Softmax Substitute Variables Quarter|Q3'20|x|line_chart 0/1 0/1 0/1 <sos> How many users <eos> How many templateLabel[1][2] <eos>",
"relevant information from charts (low CS score) even though it generates fluent text (low PPL).",
"On Statista, when the data tables are available, Chart2text and Field-Infuse models are able to extract information from the data table, but they struggle to produce texts with good quality.",
"This could be because these models did not use any large-scale pretraining.",
"On the other hand, TAB-BART and TAB-T5 are able to produce well-structured and relevant summaries.",
"The OCR-based models can generally generate fluent summaries but they are slightly less effective in extracting the relevant information since the OCR process introduces some noise in the input data.",
"We also experiment with automatically extracted tables to see how the models perform in the absence of gold data tables.",
"To this end, we extended ChartOCR (Luo et al., 2021), which predicts the raw data values of chart elements, to extract the fully-structured data table.",
"The accuracy of automatic data extraction was 77.31% (see Appendix A.5 for details).",
"We find that similar to OCR-based models, TAB_OCR-based models tend to be less effective in extracting the relevant information compared to their TAB-based counterparts which use ground truth data tables.",
"Pew, on the other hand, is much challenging because it contains many charts with ill-defined structure and the underlying data tables are not available.",
"Unsurprisingly, the performance of all the models has dropped significantly compared to that on Statista.",
"Nonetheless, we can see that without the presence of the underlying data table, the vision+text (OCR-based) models have brought notable improvements over the vision only model.",
"Further breakdown of model performance based on chart types is provided in Appendix A.4.2.",
"We also evaluate the transferability of the models and the datasets, where we first pretrain a model on a source dataset and fine-tune it on the target dataset.",
"In addition to our two datasets (Statista or Pew), we experiment with ToTTo (Parikh et al., 2020) as another source dataset, which is a large-scale open-domain English table-to-text dataset.",
"Our results show that pretraining on other datasets only brings about marginal improvement.",
"Details of this experiment can be found in Appendix A.4.1.",
"To further assess the summary quality we performed a human evaluation on 150 randomly sampled charts from the Statista dataset with four internal annotators who are native speakers of English.",
"For each chart, annotators performed pairwise comparisons between the outputs of TAB-T5, OCR-T5 and the original gold summary (served as a control), resulting in a total of 450 pairwise comparisons (Appendix A.4.3).",
"They compared the summaries based on three criteria: ( i ) Factual correctness : Which summary is more factually 4011 TAB-T5 (1) vs. OCR-T5 (2) Gold (1) vs. TAB-T5 (2) Gold (1) vs. OCR-T5 (2) Summary Factual Coherence Fluency Factual Coherence Fluency Factual Coherence Fluency Summary 1 Win 55.3% 23.3% 20.0% 30.0% 36.7% 22.0% 59.3% 43.3% 28.7% Summary 2 Win 12.0% 11.3% 11.3% 13.3% 16.7% 14.0% 7.33% 15.3% 17.3% Tie 32.7% 65.3% 68.7% 56.7% 46.7% 64.0% 33.3% 41.3% 54.0% p -value (sign test) 1.86e-11 8.77e-3 0.0395 1.31e-3 5.26e-4 0.0668 1.27e-16 4.25e-6 0.0266 Table 5: Human evaluation results for comparing between the outputs of TAB-T5, OCR-T5 and the gold summary.",
"correct ( i.e., facts mentioned are supported by the chart)?",
"( ii )",
"Coherence : Which summary is more coherent ( i.e., sentences are well connected)?",
"and ( iii ) Fluency : Which summary is more fluent and grammatically correct?",
"For each criterion, the annotator picked the better one (win) or equally good (tie).",
"Each comparison was performed by one annotator, except the first 150 comparisons for which we had two annotators to measure the agreement.",
"The agreement for these 150 comparisons, excluding ties, was 74.3% (ties were excluded since they do not affect the overall ranking of the summaries).",
"Table 5 shows that the TAB-T5 performed significantly better than OCR-T5 based on all three criteria, especially on factual correctness.",
"This is likely because, without the data table as input, OCR-T5 model often fails to generate factually correct statements from the OCR text.",
"We also observe that while the fluency of the model outputs is comparable to the gold summary, their factual correctness and coherence were significantly worse, especially for the OCR-T5 model.",
"We manually analyzed 200 random samples from Statista and Pew.",
"We chose TAB-T5 and OCR-T5 for Statista and OCR-BART and OCR-T5 models for Pew.",
"This analysis helps us to understand model errors and identify key challenges that existing models face as we describe below.",
"Perceptual and reasoning aspects As mentioned in 1, charts often describe complex patterns and trends which can be perceived by humans easily but they are not necessarily easy to derive through analysis of raw data tables.",
"In Fig. 4b, the OCR-T5 model manages to describe a trend correctly in the first sentence but describes a trend incorrectly in the last sentence.",
"These examples demonstrate the shortcomings of existing models.",
"In order to explain perceptual and reasoning aspects effectively, we need more sophisticated models that better capture prominent visual relationships in charts.",
"In particular, we aim to develop better representations including semantic graph representation of the chart that encodes numerical and logical relationships among chart objects.",
"Hallucinations Sometimes, the model outputs tokens that are irrelevant to the chart.",
"For example, while the model outputs in Fig. 4a,b are quite fluent, they contain hallucination errors.",
"This problem is commonly observed in other data-to-text work as well (Wiseman et al., 2017; Parikh et al., 2020).",
"Factual errors Factually incorrect statements are more common for the OCR-based models ( e.g., in Fig. 4a-b) since they do not take the data table as input, thus fail to associate the data values correctly.",
"In contrast, TAB-T5 which utilizes the data table as input tends to generate less factual errors.",
"This confirms that summarizing charts when the data table is not available is usually more challenging.",
"Computer vision challenges The factual errors illustrate some unique computer vision challenges.",
"First, charts do not always show data values as text labels, thus the OCR models cannot access those values.",
"Even if the data values are labeled, the absence of association between data values ( e.g., Instagram is related to 380.09M in Fig. 4a) leads to factual errors.",
"This problem might be alleviated if the model can extract the data table from a chart image.",
"While there are some initial attempts in this direction ( e.g., Luo et al. (2021); Choi et al. (2019)), more accurate data extraction from charts is necessary.",
"Generalizability The charts in our benchmark cover several different chart types and a wide variety of topics (fig. 9).",
"The charts in the Pew in particular have a wide variety of visual styles in terms of color, layout and typography as they were created over several years by different authors (see examples in fig. 1).",
"Nevertheless, finding more chart-summary pairs with more diverse visual styles is an open challenge.",
"In future, we aim to find more different sources of chart-summaries and perform cross-domain experiments across those different sources to evaluate the generalizability of models.",
"We have presented two large-scale datasets for chart summarization.",
"We also provided several state-of-the-art baselines and measures.",
"Our evaluation highlights the promise of these baselines and also reveals several unique challenges for the chart summarization task.",
"We hope that Chart-to-text will serve as a useful research benchmark for model and metric development and motivate other researchers to explore this relatively new area.",
"The authors would like to thank the anonymous reviewers for their helpful comments.",
"This research was supported by the Natural Sciences & Engineering Research Council (NSERC) of Canada.",
"During the dataset collection and annotation process, we had many ethical issues to take into consideration.",
"To respect the intellectual property of the chart publishers, we only used publicly available charts from resources that provide publication rights of downloaded content for academic purposes.",
"According to the terms of use and publication rights for Statista, 4 users are granted publication rights only to free studies of Statista, so we only used the free publicly available webpages.",
"According to the terms and conditions for Pew, 5 users are allowed to use the content as long as they are attributed to the Center or are not attributed to a different party.",
"To fairly compensate the Mechanical Turk annotators, we compensated the annotators based on the minimum wage in the United States at the time (7.25 US$ per hour) and the estimated time taken for each task (1 minute).",
"Hence, these annotators received 0.10 0.15 US$ for each chart, depending on the number of candidate paragraphs associated with it.",
"Additionally, to protect the privacy of these annotators, all of their annotations were anonymized.",
"To ensure the reproducibility of our experimental results, we have provided the hyperparameter settings and estimated training time in Appendix A.3.",
"We foresee one possible misuse of our models that is to spread misinformation.",
"Currently, our model outputs tend to appear fluent but contain some hallucinations and factual errors, as detailed in 5.3.",
"Hence, if such model outputs are published without being corrected, it may mislead and misinform the general public."
] | [
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"objective",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"In this paper, we study the challenging problem of automatic generation of citation texts in scholarly papers.",
"Given the context of a citing paper A and a cited paper B, the task aims to generate a short text to describe B in the given context of A. One big challenge for addressing this task is the lack of training data.",
"Usually, explicit citation texts are easy to extract, but it is not easy to extract implicit citation texts from scholarly papers.",
"We thus first train an implicit citation text extraction model based on BERT and leverage the model to construct a large training dataset for the citation text generation task.",
"Then we propose and train a multi-source pointer-generator network with cross attention mechanism for citation text generation.",
"Empirical evaluation results on a manually labeled test dataset verify the efficacy of our model.",
"This pilot study confirms the feasibility of automatically generating citation texts in scholarly papers and the technique has the great potential to help researchers prepare their scientific papers.",
"A scientific paper usually needs to cite a lot of reference papers and introduce each reference paper with some text.",
"In this study, the text describing a reference paper is called citation text.",
"A researcher usually needs to find relevant papers he wants to cite and write some text to introduce them when writing a scientific paper.",
"However, the process of writing citation texts is tedious and time-consuming.",
"In order to reduce the burden of researchers, we propose and try to address the task of automatic citation text generation.",
"Automatic generation of citation texts in scholarly papers is a challenging and meaningful task, however, there are very few studies investigating this problem.",
"Given a cited paper B and the context in a citing paper A (i.e., the sentences before and after a specific position in paper A), the task aims to generate a short text to describe B with respect to the given context in A. The task is like the task of scholarly paper summarization (Luh-n, 1958; Edmundson, 1969; Qazvinian and Radev, 2008; Mei and Zhai, 2008).",
"Both of the two tasks aim to produce a text to describe the cited paper B. The major difference between the two tasks is that the citation texts reflect not only the salient content of B, but also the context of A. Different citing papers usually have different descriptions of the same cited paper.",
"Sometimes one paper may cite another paper several times in different positions but give different descriptions because the specific contexts are different.",
"Another difference between the two tasks is the length of the text.",
"A citation text is usually much shorter than a paper summary.",
"Generally, citation text generation can be considered as a task of generating a very short summary of paper B given the context of paper A. The difficulty lies in that given different A or different contexts of A, the task aims to produce different citation texts for the same B. Most commonly, the citation text is a single sentence, but sometimes it may consist of several sentences (Jebari et al., 2018; Qazvinian and Radev, 2010; Sondhi and Zhai, 2014).",
"Like (Small, 2011), we define citation text as a block of text composed of one or more consecutive sentences surrounding the reference sign.",
"Each citation sentence can be classified as explicit or implicit (Qazvinian and Radev, 2010; Athar and Teufel, 2012; Yasunaga et al., 2019).",
"Explicit citation is a citation sentence that contains explicit reference to the cited paper.",
"An implicit (or non-explicit) citation sentence appears around the explicit citation sentence and it does not attach any explicit reference to the cited paper but supplies additional information about the cited paper.",
"The citation text generation task in this study aims to generate both explicit and implicit citation sentences.",
"We build a citation text generation dataset based on the ACL Anthology Network corpus (AAN) (Radev et al., 2013).",
"We first perform human annotation and get 1,000 citation texts (including explicit and implicit citation sentences).",
"We randomly select 400 citation texts as test set, and use the other 600 citation texts to first train a citation text extraction model and then use the extraction model to automatically extract many more citation texts to build a large-scale training dataset.",
"With the training dataset we construct, we can train our citation generation model.",
"In this paper, we use pointer-generator network (See et al., 2017) as the baseline model.",
"We believe that the key to dealing with citation text generation problem is modelling the relationship between the context of citing paper A and the content of cited paper B. So we encode the context of paper A and the abstract of paper B separately, and add cross attention mechanism by making context and abstract attend to each other.",
"We call our model multi-source pointer-generator network with cross attention mechanism.",
"The evaluation results show that our model outperforms the baseline models.",
"Our contributions are summarized as follows: We propose a new task of automatic citation text generation in scholarly papers.",
"We annotate 1,000 citation texts and train a citation extraction model to automatically construct a large training dataset for the citation text generation task.",
"The data are available at https://github.com/XingXinyu96/ citation_generation .",
"We propose the multi-source pointer-generator network with cross attention mechanism to address this challenging task.",
"Evaluation results demonstrate the efficacy of our proposed model.",
"Firstly, we introduce some studies on citation extraction.",
"Kaplan et al. (2009) proposed a method based on coreference-chains for citation extraction.",
"Sondhi and Zhai (2014) first independently trained a separate HMM for each citation in the article and then performed a constrained joint inference to label non-explicit citing sentences.",
"Qazvinian and Radev (2010) proposed a framework based on probabilistic inference to extract implicit citations.",
"Jebari et al. (2018) proposed an unsupervised approach which is based on topic modeling and word embedding for implicit citation extraction.",
"Jebari et al. (2018) introduced method based on neural network but it did not give out convincing evaluation results.",
"A few studies have investigated the task of summarizing single scholarly paper, i.e., single document summarization in the scientific domain, which is relevant to the citation text generation task.",
"Early works include (Luhn, 1958; Baxen-dale, 1958; Edmundson, 1969), and they tried to use various features specific to scientific articles for summary extraction.",
"Later on, citation information has shown its usefulness for scientific paper summarization (Qazvinian and Radev, 2008; Mei and Zhai, 2008; Qazvinian and Radev, 2010; Cohan and Goharian, 2018; Yasunaga et al., 2019).",
"Several benchmark tests have been set up for scientific summarization, including TAC 2014 Biomedical Summarization track and the CL-SciSumm Shared Task (Jaidka et al., 2016).",
"A few other studies have investigated the task of summarizing multiple scholarly papers, i.e., multi-document summarization in the scientific domain (Mohammad et al., 2009; Yeloglu et al., 2011; Chen and Zhuge, 2014).",
"Related work generation is a special case of multi-document scientific summarization (Hoang and Kan, 2010; Hu and Wan, 2014; Chen and Zhuge, 2019).",
"However, the above related work about scholarly paper summarization is different from the task of citation text generation, which aims to generate a usually very short text to describe the cited paper in the given context of the citing paper.",
"Formally, given a citing paper A, a cited paper B and the context C in A, the task aims to generate the citation text T to describe B. The context C refers to the sentences surrounding the target citation text in A and it is provided to distinguish different mentions of B in different positions of A. The following example shows a paragraph of (Lu et al., 2008) and this article cites paper (Wong and Mooney, 2006).",
"In this example, A refers to (Lu et al., 2008) and B refers to (Wong and Mooney, 2006).",
"The sentence underlined (i.e., the second sentence) is an explicit citation, and the sentence in italics (i.e., the third sentence) is an implicit citationand both of them compose the citation text.",
"The remaining two sentences (i.e., the first and last sentences) compose the context C of A. The phrase in bold which indicates the explicit citation to paper B is called reference sign.",
"And the explicit citation text can be defined as the sentence with a reference sign to the cited paper.",
"The implicit citation text can be defined as the sentences that provide information about the cited paper but do not have any reference sign.",
"...SILT (Kate et al., 2005) learns deterministic rules to transform either sentences or their syntactic parse trees to meaning structures.",
"WASP (Wong and Mooney, 2006) is a system motivated by statistical machine translation techniques.",
"It acquires a set of synchronous lexical entries by running the IBM alignment model and learns a log-linear model to weight parses.",
"KRISP (Kate and Mooney, 2006) is a discriminative approach ...",
"In this study, we build a citation generation dataset based on the ACL Anthology Network corpus (AAN) (Radev et al., 2013).",
"The ACL anthology is a collection of papers from the Computational Linguistics journal, and proceedings from ACL conferences and workshops.",
"In particular, we download and use the 2014 version of the AAN corpus which includes almost 23594 papers.",
"After removing papers containing many garbled characters and papers without abstracts, there remains 16675 papers.",
"The metadata of each paper and the paper citation network have been extracted and stored.",
"We find all the mentions of each reference paper in a citing paper by using manually designed regular expressions to match the corresponding reference signs.",
"Lastly, we extract 86052 explicit citations for further use.",
"For each reference sign, we perform human annotation to get all citation sentences.",
"We label a vector in which each dimension corresponds to a sentence.",
"A sentence is marked with C if it is an explicit citation, and with 1 if it is an implicit citation.",
"All other sentences are marked with",
"0. The label vector of the example we mentioned before is [0,C,1,0].",
"Our annotation process has two steps.",
"First, we annotate the explicit citation sentences.",
"Despite we have extracted explicit citations with rules, we cannot assure that the extraction is completely correct.",
"In order to accurately evaluate the performance of our methods, the explicit citations in the test dataset should be human annotated.",
"We randomly choose some automatically extracted explicit citations and highlight the reference signs we find.",
"The annotators only need to judge if they think the extraction of reference sign is correct.",
"We stop this step when we get 1,000 explicit citations which are ensured correct by human.",
"The second step is to annotate implicit citation texts.",
"For each explicit citation sentence, we take three sentences before it and three sentences after it as candidate sentences 1 .",
"Note that all the candidate sentences must be in the same section as the explicit citation sentence.",
"We provide candidate sentences, explicit citation sentence, abstract of citing paper and cited paper for every annotator.",
"Explicit citation sentence has already been labelled with C, and the annotators just need to label other sentences with 1 or",
"0. Note that we require the citation sentences to be continuous, which means there cannot be non-citation sentences between two citation sentences.",
"To make the data more reliable, we make sure that every annotation instance must be annotated by three different people.",
"When they disagree with each other, we take the label chosen by majority.",
"After the annotation process, we get 1,000 annotated citation texts (including both explicit and implicit citation sentences) for further use.",
"We randomly choose 400 citation texts as the final test dataset and the remaining citation texts are used for training.",
"After the annotation process, we have 400 citation texts as test dataset and 600 citation texts for training.",
"However, we need large-scale training data to train a feasible citation text generation model.",
"So we decide to use the 600 human annotated citation texts to train an implicit citation text extraction model to expand our training dataset.",
"We treat implicit citation text extraction as a sequence labeling problem and use BERT (Devlin et al., 2018) to deal with this problem.",
"We add a classification layer on the final hidden representation of BERT and fine-tune the whole model on our dataset.",
"We concatenate all the candidate sentences, the explicit citation sentence and the abstract of the cited paper as the input of BERT.",
"We add a special tag '[s]' at the beginning of all sentences, a special tag '[explicit]' at the beginning of the explicit citation sentence and a special tag '[abs]' at the be-1 For simplicity, we do not consider the sentences with a long distance to the explicit citation.",
"ginning of the cited paper's abstract.",
"The abstract of cited paper does not need to be labelled but it can provide a lot of information to help label the candidate sentences.",
"BERT gives out the probability of every sentence to be implicit citation.",
"We set a threshold to control the identification of implicit citation sentence.",
"When the probability given out by BERT is greater than , we take the corresponding sentence as an implicit citation sentence.",
"It is obvious that the smaller is, the more sentences will be recognized as implicit citation sentence.",
"To ensure the citation text being continuous, we start to identify implicit citation sentences from the explicit citation sentence to both sides and stop when meeting the first non-citation sentence.",
"We do 10 fold cross-validation on our training dataset and use the 400 test data as external test data.",
"The 600 training data are split into 10 subsets.",
"When training, we use 9 subsets for training and use the remaining one subset as test set.",
"The average results for cross-validation are shown in Table",
"1. The average results on external test data are shown in Table",
"2. Our model is compared with these baseline models: All one : It labels all candidate sentences with",
"Cosine sim : It first uses bag of words model to represent all texts as vectors.",
"Then it calculates the cosine similarity between candidate sentence and cited paper's abstract, and the cosine similarity between candidate sentence and the explicit citation sentence.",
"When the two similarities are both greater than the threshold, the sentence is labelled with",
"1. W2v sim : This model is also based on similarity.",
"The similarity in this model is calculated based on word2vec model.",
"With two sequence of words, it first gets the corresponding two sequences of Precision Recall F-value Acc All one 12.67 100.00 22.49 12.67 Random 12.31 49.40 19.71 49.01 Cosine sim 16.87 54.62 25.78 60.15 W2v sim 19.43 54.62 28.66 65.55 SVM 34.39 26.10 29.68 84.33 Table 3: Test results on external test data for the baseline models Precision Recall F-value Acc =0.9 73.66 60.64 66.52 92.26 =0.1 66.02 68.67 67.32 91.55 Table 4: Test results on external test data when using full training data vectors { u i } and { v j } with word2vec model.",
"Then it uses the two sequences of vectors to calculate a similarity matrix M .",
"The element of the matrix M i,j = cos ( u i , v j ) .",
"Finally it keeps the max value of every row vector and takes the average value of the max value list as the final similarity.",
"SVM : It trains an SVM to classify if a sentence is implicit citation sentence.",
"The features include sentence position feature, special pattern feature, similarity feature, etc.",
"Results of all these baseline models are shown in Table",
"3. As shown in these tables, our extraction model outperforms all the baseline models.",
"The F-value of our extraction models with =0.1 and =0.9 are very close.",
"This indicates that they have close performance.",
"The precision of extraction model with =0.9 is higher, while the recall of extraction model with =0.1 is higher.",
"So we can get two different extraction models with two different .",
"And with the two different extraction models, we can construct two different datasets for further training citation generation model.",
"To get the two different datasets, we use all 600 data to train two final extraction models.",
"We call the extraction model with =0.1 EXT =0 .",
"1 and call the extraction model with =0.9 EXT =0 .",
"9 .",
"The results on external test data when using full training data are shown in Table",
"4. 5 Final Evaluation Datasets With the two implicit citation extraction models we trained in the previous section, we construct three datasets for experiments.",
"In each dataset, a data example is a triple: [citing paper's context, cited paper's abstract, gold citation text].",
"The first dataset is an explicit citation text generation dataset ( Explicit dataset ).",
"The gold citation text in the training data and test data is single explicit citation sentence.",
"Note that the explicit citation sentences in the training data are automatically extracted with rules and the explicit citation sentences of test data are human annotated.",
"The second dataset is a full citation text generation dataset.",
"The gold full citation texts of test data are human annotated.",
"The gold full citation text of training data is constructed as follows: the gold explicit citation text is extracted with rules and the gold implicit citation text is extracted with EXT =0 .",
"1 .",
"This extraction model gets higher recall, so we call this dataset high-recall full citation text generation dataset ( HR dataset ).",
"The third dataset is also a full citation text generation dataset, and it is constructed in the same way with the second dataset except that the gold implicit citation text of training data is extracted with EXT =0 .",
"9 and we call it high-precision full citation text generation dataset ( HP dataset ).",
"The cited paper's abstract in all the three datasets refers to the abstract of the cited paper B. We use it to represent the content of paper B because the whole article is too long to encode.",
"The citing paper's context in all the three datasets refer to the sentences around the gold citation text in citing paper A. we take three sentences before the gold citation text and three sentences after it as the context.",
"Note that all the context sentences must be in the same section as the gold citation text.",
"Finally, we have three datasets for experiments: Explicit dataset : This dataset is built for explicit citation text generation.",
"The test set contains 400 examples with human-annotated explicit citation texts and the training set contains 600 examples with human-annotated explicit citation texts and 85,052 examples with explicit citation texts extracted based on rules.",
"The average lengths of explicit citation texts in the training and test sets are 29.64 words and 27.14 words, respectively.",
"HR dataset : This dataset is built for full citation text generation.",
"The test set contains 400 examples with human-annotated full citation texts and the training set contains 600 examples with human-annotated full citation texts and 85,052 examples with automatically extracted full citation texts (particularly using EXT =0 . 1 to extract implicit citation sen-tences).",
"The average lengths of full citation texts in the training and test sets are 43.50 words and 42.75 words, respectively.",
"HP dataset : This dataset is similar to HR dataset , and EXT =0 .",
"9 is used to automatically extract implicit citation sentences in the training dataset.",
"The average lengths of full citation texts in the training and test sets are 39.77 words and 42.75 words, respectively.",
"Our citation text generation model is a multi-source pointer-generator network with cross attention mechanism.",
"Because the citation generation task has two input sequences, we use two encoders to encode them separately and allow the model to copy words from both input sequences.",
"Such a multi-source pointer-generator network does not have the ability to model the relationship between two input sequences, so we add a cross attention mechanism on them.",
"The cross attention mechanism calculates the attention distribution of every word to the other sequence of words.",
"These attention distributions are used to help the decoder.",
"We believe that the citing paper's context can tell the model what information in cited paper's abstract is important and vice versa.",
"The structure of the whole model is shown is Figure",
"1. 6.1 Pointer-Generator Network A typical seq2seq model with attention mechanism has three components: an encoder , a decoder and an attention network.",
"The input text is seen as a sequence of words { w 1 , w 2 , ...w n } .",
"The encoder which is a single-layer bidirectional LSTM network receives input words one by one and produces a sequence of encoder hidden states { h i } .",
"At each decoding step t , the decoder which is a single-layer unidirectional LSTM receives the previous word and produces decoder state s t .",
"The attention distribution a t is calculated as in (Bahdanau et al., 2014): e ti = v T tanh ( W h h i + W s s t + b attn ) (1) a t = softmax ( e t ) (2) where v , W h , W s and b attn are learnable parameters.",
"At each decoding step t , the attention vector a t is used to calculate the context vector c t : c t = (cid:88) i a ti h i (3) Citing Paper's ContextEncoder Cited Paper's AbstractEncoder Match Matrix relationshipvectors RowAttentionMatrix ColumnAttentionMatrix dw1 Decoder relationshipvectors Attention cw1 softmax on column vectors softmax on row vectors cw2 cw3 aw1 aw2 aw3 dw2 dw3 <s> dw1 dw2 Attention Figure 1: The structure of our generation model The context vector c t and the decoder state s t are used to produce the vocabulary distribution P v : P v = softmax ( V 2 ( V 1 [ s t , c t ] + b ) + b (cid:48) ) (4) where V 1 , V 2 , b and b (cid:48) are learnable parameters.",
"P v is a probability distribution over all words in the vocabulary.",
"During training, we use P v to calculate the cross entropy loss.",
"At each decoding step, this network can generate word like normal seq2seq model or copy word from the source sequence.",
"The generation probability p gen for timestep t is: p gen = ( W Tc c t + W Ts s t + W Tx x t + b ptr ) (5) where c t is the context vector, s t is the decoder state, x t is the decoder input, W c , W s , W x and b ptr are learnable parameters and is the sigmoid function.",
"p gen is used as a soft switch to choose between generating a word from the vocabulary or copying a word from input sequence.",
"For each text, we define an extended vocabulary which is the union of the vocabulary and all words appearing in the source text.",
"We obtain the following probability distribution over the extended vocabulary: P ( w ) = p gen P v ( w ) + (1 p gen ) i : w i = w a ti (6) Note that if w is not in the vocabulary, P v ( w ) is zero.",
"Then we use the probability distribution over the extended vocabulary to calculate the loss.",
"Then we introduce our generation model.",
"Firstly we change the pointer-generator network to a multi-source pointer-generator network.",
"The multi-source pointer-generator network has two encoders and one decoder.",
"The two encoders encode the citing paper's context and cited paper's abstract separately.",
"The input context of citing paper is seen as a sequence of words { cw 1 , cw 2 , ..., cw n } and the input cited paper's abstract is seen as a sequence of words { aw 1 , aw 2 , ..., aw m } .",
"We use the same notation to represent both a word and its embedding vector.",
"The context is encoded by corresponding encoder to a sequence of encoder hidden states { ch i } and the cited paper's abstract is encoded to a sequence of encoder hidden states { ah j } .",
"At each decoding step t , we calculate attention vectors { ac ti } , { as ti } and corresponding context vectors c 1 t , c 2 t separately as described in equations (1), (2) and (3).",
"To make the model copy words from both two encoders, we change equation (5) to: [ p gen , p copy 1 , p copy 2 ] = softmax ( W Tc 1 c 1 t + W Tc 2 c 2 t + W Ts s t + W Tx x t + b ptr ) (7) where p gen is the probability of generating words, p copy 1 is the probability of copying words from citing paper's context and p copy 2 is the probability of copying words from cited paper's abstract.",
"And equation (6) needs to be changed to: P ( w ) = p gen P v ( w ) + p copy 1 i : cw i = w ac ti + p copy 2 i : aw i = w as ti (8) Then we add the cross attention mechanism to the multi-source pointer-generator network.",
"By making citing paper's context and cited paper's abstract attend to each other, we capture the relationships between them.",
"First, we calculate a match matrix M between the sequence of context's states { ch i } and the sequence of cited paper's abstrac-t's states { ah j } .",
"The element of the match matrix M i,j is: M i,j = ch i ah j (9) Then we apply softmax function on the row vectors of the matrix and get an attention matrix A row .",
"The row vector A rowi of the attention matrix is: A rowi = sotmax ([ M i, 1 , M i, 2 , ..., M i,m ]) (10) The vector A rowi represents the attention of word cw i to the sequence of words { aw 1 , aw 2 , ..., aw m } .",
"We also apply softmax function on the column vectors of the matrix and get another attention matrix A column .",
"The column vector of the attention matrix A columni represents the attention of word aw i to the sequence of words { cw 1 , cw 2 , ..., cw n } .",
"With the two attention matrices, we calculate two special sequences of vectors.",
"The first sequence of vectors { r 1 , r 2 , ..., r n } is calculated as: r i = mj =1 A rowi,j aw j (11) The second sequence { q 1 , q 2 , ..., q m } is calculated as: q j = ni =1 A columni,j cw i (12) The vector r i represent what the word cw i thinks about the sequence of words { aw 1 , aw 2 , ..., aw m } , while the vector q j represents what the word aw j thinks about the sequence of words { cw 1 , cw 2 , ..., cw n } .",
"We believe that the two sequences of vectors can model the relationship between the input citing paper's context and cited paper's abstract, so we call them relationship vectors.",
"With these two sequences of relationship vectors, we calculate two new context vectors c 3 t and c 4 t separately at each decoding step t , by replacing the encoder hidden state h i with the relationship vector r i or q j in equations (1) (2) and (3).",
"Finally, we calculate the vocabulary distribution with all four context vectors.",
"We just need to change equation (4) to: P v = softmax ( V 2 ( V 1 [ s t , c 1 t , c 2 t , c 3 t , c 4 t ] + b ) + b (cid:48) ) (13) The final probability distribution over the extended vocabulary is still calculated as equation (8).",
"The baseline models include: RandomSen : It randomly selects a sentence from the abstract of paper B.",
"MaxSimSen : It selects a sentence from the abstract of paper B, which has the largest similarity with the context of A.",
"EXT-ORACLE : It can be viewed as an upper bound for extractive models.",
"It creates an oracle citation text by selecting the best possible sentence from the abstract of paper B that gives the highest ROUGE with respect to the gold text.",
"COPY-CIT : It randomly copies one citation text from the papers in the training dataset which also cite the paper B. PTGEN : It is a pointer-generator network which allows both copying words via pointing and generating words from a fixed vocabulary.",
"When using this model, we concatenate the citing paper's context and the cited paper's abstract as the input sequence.",
"Our proposed model is called PTGEN-Cross .",
"Both our model and the PTGEN has 256-dimensional hidden states and 128-dimensional word embeddings.",
"The vocabulary size is set to 50k.",
"At test time the citation texts are produced using beam search with beam size",
"4. 7.2 Results 7.2.1 Automatic Evaluation We evaluate our models with ROUGE (Lin, 2004), reporting the F 1 scores for ROUGE-1, ROUGE-2 Context ...They include entity approaches for local coherence which track the repetition and syntactic realization of entities in adjacent sentences [otherrefer] and content approaches for global coherence which view texts as a sequence of topics, each characterized by a particular distribution of lexical items [otherrefer].",
"The test results on three datasets are shown in Tables 5, 6 and 7, respectively.",
"On all three datasets, extractive models perform poorly.",
"Our baseline generation model PTGEN outperforms EXT-ORACLE which can be seen as a 'perfect' extractive system.",
"This is completely different from how these models preform on other summarization tasks like news document summarization.",
"We believe it shows the particularity of this task.",
"It not only requires the model to capture the important content of the cited paper, but also requires the model to capture the attitude of the citing paper to the cited paper.",
"The model not only needs to generate fluent and informative text, but also needs to ensure the contextual coherence.",
"Our proposed model PTGEN-Cross obviously outperforms the baseline model PTGEN.",
"This proves the effectiveness of the cross attention mechanism.",
"We think the cross attention mechanism helps the model capture the relationship between the citing paper's context and the cited paper's abstract.",
"The results on explicit citation text generation dataset are all higher than the results on the other two datasets, which means the task of explicit citation text generation is easier than the task of full citation text generation.",
"We think it is because the context of explicit citation sometimes contains some implicit citation sentences and these sentences can be very helpful to the generation of explicit citation text.",
"Another possible reason is that the quality of the training dataset for explicit citation generation is higher than the other two training datasets.",
"Because the test data of the two full citation text generation datasets is the same, we can compare the results of our model training on the two datasets.",
"The model trained on the high-recall dataset performs slightly better.",
"This tells us the coverage ability of the implicit citation extraction model is more important when constructing training dataset for citation generation.",
"We randomly sample 50 instances from the high-recall test set and perform human evaluation on them.",
"Three graduate students are employed to rate the citation text produced by each method in four aspects: readability (whether the citation text is flu-ent), content (whether the citation text is relevant to the cited paper's abstract), coherence (whether the citation text is coherent with the citing paper's context) and overall quality.",
"The rating score ranges from 1 to 5, and 1 means very bad and 5 means very good.",
"Note that every text is scored by three judges and we take take the average of three scores.",
"The results are shown in Table 9.",
"As is shown in the table, our model outperforms the baseline model, especially with respect to the coherence and overall aspects.",
"This further demonstrates the efficacy of our proposed model.",
"We show an example of generation in Table 8.",
"Note that all reference signs to the cited paper are masked as '[refer]' and all reference signs to other papers are masked as '[otherrefer]'.",
"The '[cit]' in bold in context indicates the position the citation text should be.",
"We can see that the citation text generated by our model is more contextual coherent because it can capture the relationship between context and the cited paper's abstract better.",
"Work",
"Qiaozhu Mei and ChengXiang Zhai.",
"2008.",
"Generating impact-based summaries for scientific literature.",
"In Proceedings of ACL-08: HLT , pages 816824.",
"In this paper we investigate the challenging task of automatic generation of citation texts in scholarly papers.",
"We annotate a dataset and train an implicit citation extraction model to automatically enlarge the training data.",
"we then propose the multi-source pointer-generation network with cross attention mechanism to deal with this task.",
"Empirical evaluation results on three datasets verify the efficacy of our proposed method.",
"In future work, we will consider introducing more information like the citation texts to the cited paper in other papers to help the generation.",
"This work was supported by National Natural Science Foundation of China (61772036), Tencent AI Lab Rhino-Bird Focused Research Program (No.JR201953) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).",
"We thank the anonymous reviewers for their helpful comments.",
"Xiaojun Wan is the corresponding author."
] | [
"method",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"method",
"result",
"method",
"abstain",
"result",
"objective",
"method",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"We present substructure distribution projection (SUBDP), a technique that projects a distribution over structures in one domain to another, by projecting substructure distributions separately.",
"Models for the target domain can then be trained, using the projected distributions as soft silver labels.",
"We evaluate SUBDP on zero-shot cross-lingual dependency parsing, taking dependency arcs as substructures: we project the predicted dependency arc distributions in the source language(s) to target language(s), and train a target language parser on the resulting distributions.",
"Given an English treebank as the only source of human supervision, SUBDP achieves better unlabeled attachment score than all prior work on the Universal Dependencies v2.2 (Nivre et al., 2020) test set across eight diverse target languages, as well as the best labeled attachment score on six languages.",
"In addition, SUBDP improves zero-shot cross-lingual dependency parsing with very few (e.g., 50) supervised bitext pairs, across a broader range of target languages.",
"Zero-shot cross-lingual dependency parsing is the task that requires prediction of dependency parses without seeing any parsing example in the target language; instead, the model may use annotated parses in other languages.",
"A popular line of work is annotation projection: the parses generated by a source language dependency parser are projected into the target language, where the projected parses are then used to train a new parser.",
"As illustrated in Figure 1b, most annotation projection methods typically output partial hard dependency trees, 1 where there either is or is not an arc between any pair of 1 Throughout this paper, we refer to dependency parse trees with 0/1 arc and label probabilities, i.e., conventional dependency trees, as hard trees ; in contrast, we refer to collections of per-word head distributions and per-arc label distributions with continuous probabilities as soft trees .",
"(b) Projection with only one-to-one alignments.",
"words.",
"In addition, most bitext-based work has relied on one-to-one word alignment between bitext pairs (e.g., I and in Figure 1; Ma and Xia, 2014; Lacroix et al., 2016; Rasooli et al., 2021, inter alia ), discarding information in many-to-one alignments (e.g., book store and in Figure 1).",
"In this work, we introduce substructure distribution projection (SUBDP; Figure 1a), where dependency arcs act as substructures.",
"We project substructure distributions, i.e., the conditional prob-6547 ability distribution of the corresponding head given a word.",
"2 When the source parse is a hard tree, SUBDP has the same behavior as prior work (e.g., Lacroix et al., 2016) for arcs that are only involved in one-to-one alignments; for many-to-one alignments, SUBDP projects the corresponding arcs into soft arc distributions in the target language.",
"Therefore in SUBDP, a target language word may have multiple heads in the projected trees, where their probabilities sum to one.",
"More generally, SUBDP may take dependency arc or label distributions (i.e., soft trees) in the source language(s), instead of hard trees, as the input.",
"As in annotation projection approaches, the projected soft trees are then used to train a target language parser.",
"We evaluate SUBDP on zero-shot cross-lingual dependency parsing with eight diverse languages from the Universal Dependencies v2.2 (Nivre et al., 2020), where the English treebank is the only source of human supervision.",
"Taking English as the source language, SUBDP significantly outperforms all baseline methods on all distant languages (Arabic, Hindi, Korean, and Turkish) in our experiments, in terms of both labeled attachment scores (LAS) and unlabeled attachment scores (UAS), while achieving superior UAS on all nearby languages (German, French, Spanish, and Italian) as well.",
"Further analysis shows that SUBDP also helps improve zero-shot cross-lingual dependency parsing with a small amount of supervised bitext, across a broader range of target languages.",
"Zero-shot cross-lingual dependency parsing.",
"3 Existing approaches can be classified into the following categories: 1. Delexicalized training (Zeman and Resnik, 2008; McDonald et al., 2011; Cohen et al., 2011; Durrett et al., 2012; Rosa and abokrtsk, 2015, inter alia ), which only considers delexicalized features (e.g., part-of-speech tags) in training.",
"2. Transfer with cross-lingual embeddings (Tckstrm et al., 2012; Guo et al., 2015; Schuster et al., 2019, inter alia ), which assumes that cross-lingual word representations, including word clusters (Tckstrm et al., 2012; Ammar 2 Projection of the distribution over whole parse trees has been considered by Ma and Xia (2014), while SUBDP has a much lower time complexity see 2 for more discussion.",
"3 Also referred to as zero-shot dependency parsing in recent literature (Schuster et al., 2019; Wang et al., 2019).",
"et al., 2016), word type embeddings (Guo et al., 2015, 2016; Duong et al., 2015; Ammar et al., 2016; Wick et al., 2016), or contextualized crosslingual word embeddings (Schuster et al., 2019; Wang et al., 2019; He et al., 2019; Ahmad et al., 2019a,b), provide shared features for words with similar syntactic roles.",
"3. Treebank translation , which translates treebanks in the source language(s) into the target language(s) (Tiedemann et al., 2014; Tiedemann, 2015; Tiedemann and Agic, 2016) or a code-switching mode (Zhang et al., 2019), and uses them to train target language parsers.",
"4. Annotation projection , 4 which trains a parser in the source language(s), and projects the predicted source language parse trees to target language(s) using bitext (Hwa et al., 2005; Ma and Xia, 2014; Agic et al., 2016).",
"Additional strategies are usually used to improve the projection quality, such as keeping confident edges only (Li et al., 2014; Lacroix et al., 2016), projection from multiple source languages (Tck-strm et al., 2013; Agic et al., 2016; Rasooli and Collins, 2017), density based iterative filtering (Rasooli and Collins, 2015, 2017, 2019), and noisy self-training (Kurniawan et al., 2021).",
"These approaches make different assumptions on annotation availability, such as gold part-of-speech tags (Zeman and Resnik, 2008; Cohen et al., 2011; Durrett et al., 2012, inter alia ), a reasonably good translator, which uses extra annotated data in the training process (Tiedemann et al., 2014; Tiedemann, 2015; Zhang et al., 2019), high-quality bilingual lexicons (Durrett et al., 2012; Guo et al., 2015, 2016, inter alia ), or language-specific constraints (Meng et al., 2019).",
"Most bitext-based work assumes annotated bitext (Ma and Xia, 2014; Li et al., 2014; Lacroix et al., 2016, inter alia ) or bitext constructed from extra signals (e.g., Wikipedia; Rasooli et al., 2021).",
"However, He et al. (2019), Schuster et al. (2019), Ahmad et al. (2019a,b), and Kurniawan et al. (2021) only require minimal annotations (i.e., source language treebanks and unlimited raw text in relevant languages).",
"We are mainly interested in the minimal annotation setting, and will compare to this line of work.",
"Our proposed method, SUBDP, falls into the category of annotation projection.",
"Some of the 4 We use annotation projection to denote the projection of predicted parses following Rasooli and Collins (2019) and Zhang et al. (2019), and treebank translation for the projection of human-annotated trees following Tiedemann et al. (2014).",
"benefits of SUBDP relative to prior work are that it works well with minimal annotations, allows soft word alignment (3.2), supports both labeled and unlabeled parsing, and has a low time complexity O ( n 2 ) for non-projective parsing.",
"5 SUBDP can be easily extended to other tasks, such as sequence labeling, where we can define substructures (Shi et al., 2021a) and substructure distributions.",
"Multilingual contextualized representations.",
"Recent contextualized models pretrained on multilingual text (Devlin et al., 2019; Conneau et al., 2020; Tran et al., 2020, inter alia ) are effective across a wide range of cross-lingual NLP tasks, including bitext retrieval (Tran et al., 2020), bilingual lexicon induction (Shi et al., 2021b), cross-lingual named entity recognition (Pires et al., 2019; Mulcaire et al., 2019), and cross-lingual dependency parsing (Schuster et al., 2019; Wang et al., 2019).",
"In this work, we apply two of the contextualized pretrained models, XLM-R (Conneau et al., 2020) and CRISS (Tran et al., 2020) to generate unsupervised bitext.",
"Soft-label methods.",
"Calculating the cross entropy loss between model output and a soft distribution (instead of one-hot labeles) has been applied to knowledge distillation (Hinton et al., 2015; You et al., 2017; Sanh et al., 2019, inter alia ), crosslingual named entity recognition (Wu et al., 2020), and for handling annotation discrepancy (Forna-ciari et al., 2021).",
"Our approach is a type of soft-label method, with additional post processing to the output of the original models.",
"Our pipeline for zero-shot cross-lingual dependency parsing consists of three steps: (1) train a bi-affine dependency parser P 1 in the source language L 1 , (2) project annotations on L 1 sentences to their parallel sentences in the target language L 2 (3.3), and (3) train another bi-affine dependency parser P 2 for L 2 (3.4).",
"We first present some background (3.1) and preliminaries (3.2).",
"parser.",
"For a sentence with n words (cid:104) w 1 , . . . , w n (cid:105) , 6 we denote the word features when acting as heads and dependents by 5 In contrast, Ma and Xia (2014) require O ( n 4 ) time for non-projective unlabeled dependency parsing.",
"H (cid:82) n d h and D (cid:82) n d d respectively, where d h and d d denote the dimensionality of the corresponding features.",
"The probability of word w i having head w j can be formulated as an n -way classification problem: S (arc) = DW (arc) H (cid:124) (1) P ( w j | w i ) = exp (cid:16) S (arc) i,j (cid:17) (cid:80) nk =1 exp (cid:16) S (arc) i,k (cid:17) , (2) where W (arc) (cid:82) d d d h is the parameters of the bi-affine module.",
"7 Given log P ( w j | w i ) for every pair of i and j , the dependency trees can be inferred by finding the spanning arborescence of maximum weight using the ChuLiu-Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1968).",
"We use the algorithm proposed by Tarjan (1977), which has an O ( n 2 ) time complexity for each sentence.",
"We denote the candidate dependency label set by L .",
"Parameterized by W (label) (cid:82) d d d h | L | , we define the probability that the arc from head w j to dependent w i has the label (cid:96) by S (label) i,j,(cid:96) = (cid:88) p (cid:88) q D i,p W (label) p,q,(cid:96) H j,q P ( (cid:96) | w j w i ) = exp (cid:16) S (label) i,j,(cid:96) (cid:17) (cid:80) | L | k =1 exp (cid:16) S (label) i,j,k (cid:17) , (3) Given the probability definitions above, the model is trained to maximize the log likelihood of the training data.",
"More details can be found in Dozat and Manning (2017).",
"We use bi-affine dependency parsers as the backbone for all parsers in this work, though it is worth noting that SUBDP works for any parser that produces a set of arc and label distributions.",
"CRISS.",
"CRISS (Tran et al., 2020) is an unsupervised machine translation model trained with monolingual corpora, starting from mBART (Liu et al., 2020), a multilingual pretrained sequence-to-sequence model with a mask-filling denoising objective.",
"During the training process, CRISS iteratively (1) encodes sentences in the monolingual corpora with its encoder, (2) mines bitext based on encoding similarity, and (3) uses the mined bitext to fine-tune the model with a machine translation objective.",
"In this work, we use CRISS to generate 7 While Eq (1) is in a bi-linear form, in practice, we can always append a constant feature column to both H and D , resulting in a bi-affine model.",
"unsupervised translation of English sentences to construct bitext, and apply its encoder to extract word features for an ablation study.",
"SimAlign.",
"SimAlign (Jalili Sabet et al., 2020) is a similarity based word aligner: given a pair of source and target sentence (cid:104) s, t (cid:105) , SimAlign computes a contextualized representation for each token in both s and t using multilingual pretrained models (Devlin et al., 2019; Conneau et al., 2020), and calculates the similarity matrix S , where S i,j represents the cosine similarity between tokens s i and t j .",
"The argmax inference algorithm selects position pairs (cid:104) i, j (cid:105) , where S i,j is both horizontal and vertical maximum, and outputs the word pairs corresponding to such position pairs as the word alignment.",
"In this work, we use XLM-R (Conneau et al., 2020) based SimAlign with the argmax algorithm to extract word alignment for SUBDP.",
"It is worth noting that pretrained multilingual models usually use byte-pair encodings (BPEs; Gage, 1994), a more fine-grained level than words, for tokenization.",
"The argmax algorithm may therefore generate many-to-one alignments.",
"More details can be found in Jalili Sabet et al. (2020).",
"Unlike bitext based word alignment (Och and Ney, 2003; Dyer et al., 2013), SimAlign does not require any bitext to produce high quality alignments, and therefore better fits the low-resource scenario with very few bitext pairs available.",
"Dependency annotations in L 1 .",
"As in the most common data settings for supervised dependency parsing, we assume access to sentences with dependency annotations: for a sentence (cid:104) w 1 , . . . , w n (cid:105) , there is a dummy word w 1 , whose unique dependent is the root word; every other word w i is labeled with h i and r i , denoting that the head of w i is w h i , with the dependency relation r i .",
"We use these annotations to train an L 1 bi-affine dependency parser P 1 , following the procedure described in 3.1.",
"Bitext.",
"We denote the available m pairs of bitext by B = {(cid:104) s ( k ) , t ( k ) (cid:105)} mk =1 , where { s ( k ) } and { t ( k ) } are sentences in L 1 and L 2 respectively.",
"Word alignment.",
"For a bitext pair (cid:104) s, t (cid:105) , we generate the word alignment matrix A { 0 , 1 } | s || t | with SimAlign, where A i,j = 1 denotes that there exists an alignment between s i and t j .",
"We would like the word alignment matrices to be right stochastic, i.e., (1) each element is nonnegative and (2) each row sums to one, to ensure that the results after projection remain distributions.",
"To handle words that have zero or more than one aligned words in the other language, we introduce the following two matrix operators.",
"The add-dummy-position operator ( ) : : (cid:82) r c (cid:82) ( r +1) ( c +1) ( r, c (cid:78) + ) ( M ) i,j = M i,j (1 i r, 1 j c ); ( M ) i,c +1 = (cid:48) [ M i, 1 , . . . , M i,c ](1 i r ); ( M ) r +1 ,j = 0(1 j c ); ( M ) r +1 ,c +1 = 1 , where (cid:48) [ ] = 1 when all input values are zero and otherwise 0 .",
"The row normalization operator NR ( ) : NR : (cid:82) r c (cid:82) r c ( r, c (cid:78) + ) NR ( M ) i,j = M i,j (cid:80) (cid:96) M i,(cid:96) .",
"Intuitively, the added dummy positions correspond to null words in the word alignment literature (Dyer et al., 2013; Schulz et al., 2016; Jalili Sabet et al., 2020, inter alia ).",
"We denote the source-to-target alignment matrix by A s t = NR (cid:16) ( A ) (cid:17) , and the target-to-source alignment matrix by A t s = NR (cid:16) ( A (cid:124) ) (cid:17) .",
"stochastic matrices by definition.",
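"A sketch of the two operators and the derived alignment matrices, writing the add-dummy-position operator as add_dummy (our naming, matching the δ used above):",
```python
import numpy as np

def add_dummy(M):
    """delta(.): append a dummy (null) row and column; rows of M that are
    all-zero get probability mass on the dummy column."""
    r, c = M.shape
    out = np.zeros((r + 1, c + 1))
    out[:r, :c] = M
    out[:r, c] = (M.sum(axis=1) == 0).astype(float)
    out[r, c] = 1.0
    return out

def norm_rows(M):
    """NR(.): make each row sum to one."""
    return M / M.sum(axis=1, keepdims=True)

A = np.array([[1., 0., 0.],     # s1 - t1
              [0., 1., 0.],     # s2 - t2
              [0., 1., 0.]])    # s3 - t2 (many-to-one); t3 is unaligned
A_s2t = norm_rows(add_dummy(A))      # right stochastic by construction
A_t2s = norm_rows(add_dummy(A.T))    # t3's mass goes to the dummy position
print(A_t2s.round(2))
```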
"Arc distribution projection.",
"We consider a pair of bitext (cid:104) s, t (cid:105) .",
"Let P 1 ( s j | s i ) denote the arc probability produced by the parser P 1 .",
"Like the dummy position notation, we specify a dummy ( | s | + 1) th word whose head is itself, that is, P 1 ( s i | s | s | +1 ) = 0 , P 1 ( s | s | +1 | s | s | +1 ) = 1 .",
"We project P 1 ( | ) to P 2 ( t q | t p ) , the arc probability distributions in the parallel L 2 example t , P 2 ( t q | t p )= | s | +1 (cid:88) i =1 | s | +1 (cid:88) j =1 A t s p,i P 1 ( s j | s i ) A s t j,q .",
"(4) It is guaranteed that P 2 ( | t p ) is a distribution for any t p a proof can be found in Appendix A.1.",
"Note that if we adopt matrix notations, where we denote P 2 ( t q | t p ) by P (2) p,q and denote P 1 ( s j | s i ) by P (1) i,j , Eq (4) is equivalent to P (2) = A t s P (1) A s t .",
"Label distribution projection.",
"Let P 1 ( (cid:96) | s j s i ) denote the label probability produced by P 1 .",
"For dummy positions, we simply add a uniform distribution, that is, P 1 ( (cid:96) | s j s i ) = 1 L if i or j = | s | + 1 .",
"We project P 1 ( | ) to P 2 ( (cid:96) | t q t p ) , the label distributions in the parallel L 2 example t , by P 2 ( (cid:96) | t q t p )= | s | +1 (cid:88) i =1 | s | +1 (cid:88) j =1 A t s p,i P 1 ( (cid:96) | s j s i ) A t s q,j P 2 ( | t q t p ) is provably a distribution for any pair of t p and t q (see Appendix A.2).",
"We train another bi-affine dependency parser P 2 on language L 2 , by minimizing the cross entropy between its produced probability P 2 and the soft silver labels P 2 .",
"Note that the added dummy word denoting the null alignment is not eventually used in the final dependency inference process and may introduce extra noise to the model, so we instead calculate the partial cross entropy loss, which does not consider elements involving dummy words.",
"Concretely, we compute the partial arc cross entropy loss for one example t as follows: L ( t ) arc ( P 2 , P 2 )= | t | (cid:88) p =1 | t | (cid:88) q =1 P 2 ( t q | t p ) log P 2 ( t q | t p ) Similarly, the partial label cross entropy loss can be computed as follows: L ( t ) label ( P 2 , P 2 ) = | L | (cid:88) (cid:96) =1 | t | (cid:88) p =1 | t | (cid:88) q =1 P 2 ( (cid:96) | t q t p ) log P 2 ( (cid:96) | t q t p ) Finally, we train the parameters of P 2 to minimize (cid:88) (cid:104) s,t (cid:105)B L ( t ) arc ( P 2 , P 2 ) + L ( t ) label ( P 2 , P 2 ) .",
"Throughout all experiments, the subword representation is a weighted sum of layer-wise representation from a frozen pretrained model, where each layer has a scalar weight optimized together with other network parameters to minimize Eq.",
"(5).",
"We convert subword features to word features by endpoint concatenation, following Toshniwal et al. (2020).",
"We use the Adam optimizer (Kingma and Ba, 2015) to train all models, where the source language parser is trained for 100 epochs with initial learning rate 2 10 3 following the baseline implementation by Zhang et al. (2020), and the target language parser is trained for 30 epochs with initial learning rate 5 10 4 .",
"8 We use the loss against silver projected distributions on the development set for SUBDP and the development LAS against projected trees for baselines for early stopping.",
"9 For evaluation, we ignore all punctuation following the most common convention (Ma and Xia, 2014; Rasooli and Collins, 2015; Kurniawan et al., 2021, inter alia ).",
"If not specified, All models in target languages are initialized with the trained source language parser.",
"All word alignments are obtained by XLM-R based SimAlign (Jalili Sabet et al., 2020), using BPE tokenization and the argmax algorithm.",
"XLM-R is used as the feature extractor.",
"We compare SUBDP to prior work in the minimal annotation setting (Table 1), where an English dependency treebank is the only annotation that involves human effort.",
"We select target languages from the overlap between those considered by Kurniawan et al. (2021), those covered by XLM-R (Conneau et al., 2020) training corpora, and those supported by CRISS (Tran et al., 2020), resulting in eight languages: Arabic (ar), Hindi (hi), Korean (ko), Turkish (tr), German (de), Spanish (es), French (fr), and Italian (it).",
"We translate English sentences using the unsupervised model CRISS to construct the required bitext.",
"10 To ensure the quality of the unsupervised bitext, we discard (1) translations where at least 80% of words appear in the corresponding source sentences, which are likely to be copies, (2) those 8 We do not observe further training loss decrease when training for more epochs.",
"The learning rate for SUBDP is tuned to optimize the development loss for German, where the German gold trees remain unused.",
"9 SUBDP does not provide a set of hard silver trees for LAS and UAS calculation.",
"containing a CRISS language token other than the target language, which are likely to be false translation into another language, and (3) those with 80% or more words appearing in the translated sentence more than once, which are likely to be repetitions.",
"Transferring from an English parser, SUBDP achieves the best UAS across all eight target languages, and the best LAS on six languages out of eight.",
"In addition, we find that SUBDP is consistent across random seeds, with a standard deviation less than 0 .",
"8 for every number in Table 1. 4.2 Ablation Study We introduce the following baselines with the same annotated data availability for an ablation study: 1. Direct transfer of English models (DT).",
"We train a bi-affine dependency parser on English treebanks, and test the model on other languages.",
"This approach is expected to outperform a random baseline as it has a pretrained cross-lingual language model-based feature extractor, which may implicitly enable cross-lingual transfer.",
"For this baseline, we test both XLM-R and CRISS encoders, as SUBDP benefits from both models.",
"2. Self-training (ST).",
"Following Kurniawan et al. (2021), we apply an XLM-R DT parser to the target language, 11 and train another parser on the predicted hard trees.",
"3. Hard projection (Hard).",
"It is intuitive to compare SUBDP against the hard tree projection baseline (Lacroix et al., 2016), where we use the same set of bitext and alignments to project trees to the target languages, keeping only the edges with both sides aligned in a one-to-one alignment.",
"We use the projected trees to train a parser in the target language.",
"11 We only consider XLM-R as the feature extractor for ST as it achieves better average DT results.",
"4. Random target parser initialization (RandI).",
"Instead of using the trained English model as the initialization of target parsers, we randomly initialize the weights in this baseline.",
"This approach matches with SUBDP in every component except the target parser initialization.",
"All of the baselines use bi-affine dependency parsers, with pretrained cross-lingual language models (XLM-R or CRISS) as feature extractors.",
"Across all languages, SUBDP significantly outperforms DT with either XLM-R or CRISS word feature extractor.",
"ST does improve over DT consistently, but is much less competitive than SUBDP.",
"This indicates that the gain of SUBDP over prior work is not simply from more powerful word features.",
"While hard treebank projection using the method proposed by Lacroix et al. (2016) is quite competitive, SUBDP consistently produces competitive (Arabic, German, Spanish) or better (Hindi, Korean, Turkish, French, Italian) results.",
"Comparing SUBDP to RandI, we find that initializing the target language parser with a trained source language (English in this work) parser helps improve performance across the board; therefore, source parser initialization should be considered as a general step in future work on zero-shot cross-lingual dependency parsing.",
"Since most existing work has used only one-to-one alignment for annotation projection (Ma and Xia, 2014; Lacroix et al., 2016; Rasooli et al., 2021, inter alia ), we would like to analyze the effect of introducing many-to-one alignment edges in SUBDP.",
"We filter SimAlign BPE argmax to obtain a more conservative version, dropping all many-to-one edges (i.e., those that have a word linked to multiple edges), 12 and compare it to the BPE argmax algorithm (Table 2).",
"While the confident one-to-one alignment achieves further improvement on Arabic and all four nearby languages, we find that the many-to-one BPE argmax alignment is important to the superior transfer performance on Hindi, Korean, and Turkish.",
"Given the fact that the scores are quite similar for Arabic, the results generally suggest using the many-to-one SimAlign BPE argmax alignments for transferring from English to distant languages, while using the more confident one-to-12 This approach is different from Hard as it takes soft source trees as the input, yielding soft target trees as silver labels to train target language parsers.",
"Following Schuster et al. (2019), we use Universal Dependencies v2.0 (McDonald et al., 2013) to evaluate zero-shot cross-lingual transfer from multiple source languages (Table 3).",
"13 For each language among German (de), Spanish (es), French (fr), Italian (it), Portuguese (pt), and Swedish (sv), annotated treebanks from all other languages and English can be used for training and development purposes.",
"For SUBDP, we generate bitext from all applicable source languages with CRISS.",
"SUBDP outperforms the previous state-of-the-art on German by 13.5 LAS, but under-performs the DT baseline on the other three languages.",
"However, if we start with a trained SUBDP parser for a target language, and use the standard training data (i.e., treebanks in other languages) to further train a bi-13 We do not report performance for Portuguese and Swedish as they are not covered by CRISS; however, the annotated treebanks in these languages are used as source treebanks when applicable.",
"affine dependency parser (DT w/ SUBDP init.), we are able to achieve better results than DT across the board, obtaining competitive or even better LAS than methods that use extra annotations other than source treebanks (Zhang and Barzilay, 2015; Guo et al., 2016).",
"We further evaluate SUBDP in another scenario where a few bitext pairs are available.",
"We consider a larger set of eighteen target languages, including Arabic (ar), Czech (cs), German (de), Spanish (es), Finnish (fi), French (fr), Hindi (hi), Hugarian (hu), Italian (it), Japanese (ja), Korean (ko), Norwegian (no), Portuguese (pt), Russian (ru), Tamil (ta), Telugu (te), Vietnamese (vi), and Chinese (zh).",
"We transfer from English to each target language with Wikimatrix bitext (Schwenk et al., 2021), where the examples are mined with an encoding similarity based bitext miner trained with annotated bitext.",
"We vary the number of Wikimatrix bitext pairs, selecting the number of pairs within the geometric sequence { 50 2 k } 9 k =0 , leaving 10% of the examples for development.",
"On average and for nearby languages (Figure 3), we find that the performance of SUBDP with 50 pairs of bitext is quite close to that with 25K pairs of bitext.",
"Although some distant languages generally require more bitext for further improvement, SUBDP outperforms the direct transfer baseline by a nontrivial margin with a small amount (e.g., 800-1.6K pairs) of bitext.",
"Our work is in line with recent work (Rasooli et al., 2021) which shows that cross-lingual transfer can be done effectively with weak supervision such as Wikipedia links.",
"Our results go further and study the setting of zero additional supervision beyond the source language treebank, demonstrating the potential of zero-shot cross-lingual dependency parsing with zero additional supervision, even between distant languages that do not share vocabulary or subwords.",
"Our work suggests a new protocol for dependency annotation of low-resource languages: (1) train a pretrained multilingual model following existing work such as XLM-R (Conneau et al., 2020) and CRISS (Tran et al., 2020), (2) annotate a small number of bitext pairs or generate bitext with trained unsupervised translation models, and (3) train a zero-shot cross-lingual dependency 25354555657585 avg.",
"Our contribution to zero-shot cross-lingual dependency parsing is arguably orthogonal to contextualized representation alignment (Schuster et al., 2019; Wang et al., 2019), where pretrained multilingual language models are finetuned for better transfer.",
"In contrast, we use the frozen pretrained models to extract features.",
"In addition, projection quality controls by heuristic rulebased filtering (Rasooli and Collins, 2015) may also be combined with SUBDP to further improve the performance.",
"Our results, on the other hand, demonstrate that multilingual pretrained models may have more applications beyond representation-based direct transferinformation extracted from these models without further supervision (e.g., word alignment in this work) may further benefit downstream tasks 6554 (e.g., zero-shot cross-lingual dependency parsing in this work) with appropriate usage.",
"While this work depends on pretrained multilingual models such as CRISS (Tran et al., 2020), which require extensive computational resources to train from scratch, SUBDP may be applied whenever bitext alignment and cross-lingual word embeddings are available.",
"In addition, the required pretrained cross-lingual models are useful for general purposes, and can be applied to other downstream NLP tasks.",
"We suggest that SUBDP can be extended to other scenarios wherever relevant parallel signals are available, such as cross-lingual named entity recognition, cross-lingual constituency parsing, or zero-shot scene graph parsing for images using only the dependency supervision in text.",
"We leave the further exploration of SUBDP on other tasks for future work.",
"We thank Kartik Goyal and Shubham Toshniwal for helpful suggestions on this work.",
"This work is supported in part by a Google Fellowship to FS."
] | [
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"method",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Typical generative dialogue models utilize the dialogue history to generate the response.",
"However, since one dialogue utterance can often be appropriately answered by multiple distinct responses, generating a desired response solely based on the historical information is not easy.",
"Intuitively, if the chatbot can foresee in advance what the user would talk about (i.e., the dialogue future) after receiving its response, it could possibly provide a more informative response.",
"Accordingly, we propose a novel dialogue generation framework named ProphetChat that utilizes the simulated dialogue futures in the inference phase to enhance response generation.",
"To enable the chatbot to foresee the dialogue future, we design a beam-search-like roll-out strategy for dialogue future simulation using a typical dialogue generation model and a dialogue selector.",
"With the simulated futures, we then utilize the ensemble of a history-to-response generator and a future-to-response generator to jointly generate a more informative response.",
"Experiments on two popular open-domain dialogue datasets demonstrate that ProphetChat can generate better responses over strong baselines, which validates the advantages of incorporating the simulated dialogue futures.",
"Recent years have witnessed a surge of interest in building open-domain chatbots using generative approaches (Shang et al., 2015; Zhao et al., 2017; Tao et al., 2018; Zhang et al., 2020).",
"These prevailing methods typically utilize dialogue histories as the dialogue context to generate the response via maximum likelihood estimation.",
"Different from Corresponding authors: Dongyan Zhao and Rui Yan.",
"directed text generation tasks like machine translation where the target sentences are strictly constrained by the source sentence (Holtzman et al., 2019), the dialogue history and the response in chitchat conversations are loosely coupled (Feng et al., 2020a).",
"In other words, open-domain chatbots often have more freedom\" to decide what to respond since there often exists multiple distinct responses that can appropriately answer the given utterance. However, we argue that such excessive freedom\" also reveals that the dialogue history only may not contain enough information to generate a desired response that is informative and easy to reply to.",
"If provided with enriched dialogue contexts that contain more useful dialogue cues, it could be easier for the model to chat with a human.",
"So here comes the questions: what kind of dialogue cues is complementary to dialogue histories, and how to obtain and use them.",
"Recent studies in representation learning have demonstrated that when representing a token in a sentence, considering the tokens on its right side in addition to its left side can bring significant improvement (Devlin et al., 2019; Yang et al., 2019).",
"Similar findings also appear in directed text generation where the future tokens on the right side can be beneficial to generate the current token (Serdyuk et al., 2017; Zhang et al., 2018c; Chen et al., 2020; Qi et al., 2020).",
"Sharing the same spirit, we pursue to use the right side\" information, which is the dialogue future in our task, as the complementary dialogue cue to enhance the generation of the current response. Intuitively, if the chatbot can be told in advance what the user would probably talk about (i.e., the dialogue future) after receiving its response, it only needs to provide a response that can smoothly connect the history and the future. To verify whether the dialogue future can act as the complementary dialogue cue, we conduct empirical studies. We find that using a dialogue generation model to learn the reverse dialogue flow (i.e., using the future to generate the response) is quite effective. Furthermore, when utilizing the ensemble of the history-to-response generation model and the future-to-response generation model to generate the response conditioned on both the history and the gold future, the quality of the generated response surely improves. Though effective, the ground truth dialogue future is inaccessible in the inference phase. Therefore, all existing works in this line choose to leverage the dialogue future only in the training phase (Shen et al., 2018; Feng et al., 2020a,b), leaving the inference phase unchanged. We argue that explicitly providing the possible dialogue futures in the inference phase can offer more direct help for the generation of the current response. To enable the incorporation of dialogue futures into response generation in the inference phase, we propose a response generation framework namely ProphetChat by answering two questions: how to acquire the future and how to use it. Figure 1 shows the generation process of ProphetChat. It consists of a history-to-response model (denoted as the forward model), a future-to-response model (denoted as the backward model), an ensemble gate, and a dialogue selector. Given a dialogue history, we first utilize an effective beam-search-like roll-out strategy to simulate possible dialogue futures. Concretely, the forward model first generates a batch of n possible responses based on the dialogue history. The dialogue selection model then comes to pick up the k -best responses. We further generate n possible futures for each of the picked responses, resulting in k n futures. The selection model again picks up the k -best futures which are of higher quality compared with randomly sampled ones. Next, conditioned on both the history and the simulated future, we employ the forward model and the backward model to jointly generate the response by summing the per-step output probability distributions of the two models using a calculated weight. The weight is obtained by a trainable gate that learns to balance the trade-off between history and future information. Finally, we gather the k responses generated solely based on the history and the k n responses generated based on both the history and the future, and use the selector to choose the top-ranked one as the final response. Since the ensemble generation model relies on the selector to sequentially select the response and the future, and the ensemble generation model also needs to learn how to balance the history and the future information given the selected future, we jointly train the whole model to make each module better collaborate with others to fulfill the ultimate goal: to maximize the likelihood of the gold response estimated by the ensemble generation model given the history and the selected future. 
We train the ensemble generation model directly using MLE objective while adopting reinforcement learning to tune the selector. Our contributions in this paper are three folds: We propose a novel dialogue generation framework named ProphetChat which leverages the simulated dialogue future to enhance response generation through the ensemble of the history-to-response generator and the future-to-response generator. To the best of our knowledge, we are the first to utilize the dialogue futures for response generation in the inference phase. To acquire better dialogue futures in the inference phase, we propose an effective beam-search-like roll-out strategy for dialogue future simulation with the help of a dialogue selector. We conduct comprehensive experiments on two popular open-domain dialogue datasets and the results verify the advantages of incorporating the simulated dialogue futures. 963 2 Related Work Dialogue System. Open-domain response generation has long been the research hot spot. Recently, various efforts have been made to generate informative and diverse responses by introducing effective architectures and learning objectives and by incorporating external knowledge. Zhao et al. (2017); Gu et al. (2018) applied CVAE to model the variability of responses. Li et al. (2016b); Zhang et al. (2018a); Saleh et al. (2020) adopted reinforcement learning to encourage the model to generate desired responses through carefully designed reward functions. Zhang et al. (2018b); Chan et al. (2019); Zheng et al. (2020); Li et al. (2021) exploited persona information to improve the coherence of the response. Zhou et al. (2018); Song et al. (2019); Shen and Feng (2020) considered emotions when generating the response. Dinan et al. (2018); Lian et al. (2019); Zhao et al. (2020a,b); Li et al. (2020) conditioned the response generation model with knowledge. Different from the above works that aimed to design specific history-to-response generation models, we propose a response generation framework where possible dialogue futures are utilized in the inference phase with the help of an effective future simulation strategy. Future Modeling. There are various scenarios where considering future information is useful. In text generation, Serdyuk et al. (2017) proposed a twin network to regularize the hidden states of the left-to-right decoder with the future-aware right-to-left decoder. Zhang et al. (2018c) used the target-side hidden states generated by the right-to-left decoder to help the right-to-left decoder during translation so that the target-side future information can help avoid under-translation. Different from these works that consider the right side tokens as the future for the current token, we define future\" as the next dialogue utterance of the current response in a dialogue session.",
"In response generation, Feng et al. (2020a) proposed to use gold futures as the conditions of two discriminators and adopted adversarial training to encourage diversity.",
"Feng et al. (2020b) employed gold dialogue futures to learn a future-aware teacher model and transferred the knowledge to a history-to-response student model via imitation learning.",
"These works only use the future information in the training phase, while we utilize the simulated dialogue future in the inference phase to provide the history-to-response generation model with direct help.",
"DialoGPT (Zhang et al., 2020) is a GPT-based response generation model pre-trained on large-scale open-domain dialogue corpus by maximizing the likelihood of the successive dialogue utterances (i.e., the forward dialogue flow) given the initial dialogue history.",
"While trained on the same corpus with the same architecture, DialoGPT-MMI is trained on the backward dialogue flow where the order of the utterances in a dialogue are reversed.",
"We adopt DialoGPT as the forward generator and DialoGPT-MMI as the backward generator.",
"GRADE (Huang et al., 2020) is a graph-enhanced dialogue evaluation model that uses both utterance-level contextualized representations and topic-level graph representations to evaluate the response.",
"As it is one of the SOTA dialogue evaluation models, we choose it as our dialogue selector.",
"Our framework consists of a forward generator GF that models the history-response-future dialogue flow, a backward generator GB that models the reversed dialogue flow, a dialogue selector S that ranks the sampled utterances conditioned on the dialogue context, and a gate g that dynamically balance the ensemble weights between GF and GB .",
"Given a history h , we first use GF to sequentially sample the response r and the future f with the help of S .",
"We then employ the ensemble of GF and GB using g to generate the response based on both h and f .",
"The firstly generated responses together with the future-aware second-pass responses are finally re-ranked by S to produce the final response.",
"Figure 2 illustrates our proposed framework.",
"Given a dialogue history h , we first use GF to generate n responses { r i } ni =1 using top-k sampling (Fan et al., 2018).",
"We denote these responses as the first-pass responses .",
"Then the selector S calculates the quality scores s r R n for all history-response pairs.",
"The quality scores naturally form a propability distribution p r R n over the sampled responses by using a softmax operation.",
"We here consider the response selection procedure as sampling from such a distribution.",
"Considering 964 !",
"that the responses in open-domain dialogue are often diverse and hard to evaluate by any automatic evaluation metric, only using the response with the highest quality score or probability to further simulate the future is suboptimal.",
"Meanwhile, generating futures conditioned on all the n sampled responses is too time-consuming.",
"Therefore, borrowing the idea from beam search (Sutskever et al., 2014) where the k -best sentence prefixes are maintained during decoding to balance the searching performance and speed, we propose to keep the k -best responses at hand while discarding the others.",
"For each of the selected response r i , we concatenate it with h and use GF to again sample n dialogue futures { f jr i } nj =1 , where f jr i denotes the j -th future simulated from h and r i .",
"Up to now, we obtain k n history-response-future dialogue triplets for the same dialogue history h by simulation.",
"We again resort to the selector S to calculate the quality scores of all the generated futures conditioned on h and their corresponding ancestral responses as { s f r 1 , . . . , s f ri , . . . , s f rk } .",
"We consider all the generated futures in the same sampling space (i.e., the future space of the given history) and directly perform softmax over the k n quality scores to get the future distribution.",
"Considering that the responses used to generate the futures are not equal in quality, we additionally multiply each probability of the simulated future f jr i with the probability of its ancestral response p r i to get the final ranking scores based on which we select k -best dialogue futures.",
"Now with the history h and k plausible dialogue futures at hand, we pursue to generate the second-pass response conditioned on both the history and",
"the future information.",
"Given that the simulated futures contain noise derived from error accumulation in the simulation phase, it is necessary to balance the weights between the history-conditioned GF and the future-conditioned GB when they collaboratively generate the response.",
"Hereby we introduce a trainable gate g which takes the last hidden states from GF and GB as inputs and calculates an ensemble weighting score w using an MLP with sigmoid activation.",
"We then generate the response r using the per-step weighted ensemble of GF and GB conditioned on h and f : P ( r t | h, f, r <t ; F , B , g ) = w P ( r t | h, r <t ; F ) + (1 w ) P ( r t | f, r <t ; B ) , (1) where the subscript t denotes the t -th token in r and F , B and g denote the parameters of GF , GB and g respectively.",
"Specifically, we sample n responses for the ensemble generation of h and each of the k futures, resulting in k n future-aware responses.",
"We denote these responses as the second-pass responses .",
"To make full use of the k -best first-pass responses, we finally re-rank the k + k n responses with S and consider the top-ranked response as our system outputs.",
"Recall that there are several components GF , GB , S , and g in our framework.",
"Although some of them can directly be used without post-training, this might be suboptimal.",
"For one thing, post-training the models on domain-specific data with the same objective often brings better performance (Guru-rangan et al., 2020).",
"For another, the original loss functions may not be thoroughly in accord with the ultimate goal in our framework.",
"Thereby we propose a customized joint training algorithm.",
"For GF and GB , we adopt a similar training objective used by Zhang et al. (2020).",
"Take GF for example, we consider every consecutive three utterances in a dialogue session as a history-response-future triplet and fine-tune the models by minimizing the negative log likelihood of the response and the future conditioned on the history.",
"GB is fine-tuned in a similar manner with the reversed inputs.",
"After fine-tuning, GF and GB are fixed.",
"For g , we can directly minimize the negative log-likelihood of the gold response r : L 1 ( g ) = (cid:88) t log P ( r t | h, f, r <t ; F , B , g ) , (2) where f is simulated from either the gold response (denoted as the teacher-forcing mode) or a sampled response (denoted as the free-running mode).",
"While for S , considering that the original objective used in Huang et al. (2020) is not customized for selecting better responses and futures, it is better to perform task-specific post-training.",
"Therefore, we propose to directly optimize S to our ultimate goal which is to maximize the log-likelihood of the gold response given the history and the selected simulated future.",
"Since the sampling operation is non-differentiable, we use REINFORCE (Williams, 1992) with a self-critic (Rennie et al., 2017) baseline to estimate the gradient.",
"We consider the future simulation process as sequential sampling from the score distributions of the responses and the futures respectively.",
"Given the n responses generated by GF conditioned on h , we sample a response r i from p r .",
"Then we generate n futures conditioned on h and r i using GF and again sample a future f jr i from them.",
"We feed this sampled future and the gold history into our ensemble generation model and calculate the log-likelihood of the gold response, which is the opposite number of Equation 2, as the reward R .",
"To reduce the variance of gradient estimation, we introduce a self-critic baseline.",
"Concretely, we sequentially select the response and the future with the highest scores in each sampling step and calculate the reward of using the greedy future as the baseline reward R b .",
"The gradients are then estimated as follows: SL 2 ( S ) ( R R b ) S [log P ( r i | h ; S ) + log P ( f jr i | r i , h ; S )] .",
"the futures generated from the sampled responses may contain much noise.",
"A better choice is to allow the model to gradually learn from easy to hard.",
"We create a curriculum schedule (Bengio et al., 2015) that gradually switches from the teacher-forcing mode to the free-running mode.",
"Specifically, let denote the proportion of teacher-forcing mode, we gradually decrease from to with cosine annealing schedule, where 0 < 1 .",
"For the overall training, we first train g using Equation 2 and set = .",
"Then we tune S using Equation 3 with the help of the above curriculum learning schedule.",
"Finally, we jointly tune g and S with a fixed = .",
"To verify the effectiveness of our proposed response generation framework, we experiment on two popular dialogue datasets, DailyDialog (Li et al., 2017) and PersonaChat (Zhang et al., 2018b).",
"We follow the original train/dev/test di-vision and reconstruct the datasets by treating each consecutive three utterances as a triplet that represents history-response-future, resulting in approximately 65k/6k/6k examples in DailyDialog and 114k/14k/13k examples in PersonaChat.",
"Posterior-GAN (Feng et al., 2020a) and RegDG (Feng et al., 2020b) are two non-GPT-based response generation models that use dialogue futures in the training time through either adversarial training or knowledge distillation.",
"DialoGPT F denotes the fine-tuned DialoGPT medium (Zhang et al., 2020) on two downstream datasets.",
"DialoGPT F,rerank is its enhanced version which is equipped with the dialogue evaluation model (i.e., GRADE (Huang et al., 2020)) to select the top-ranked response.",
"ProphetChat k =?",
"denotes the model with the same model parameters but different beam sizes when simulating the futures.",
"ProphetChat first and ProphetChat second are used to denote the settings where only the first-pass or the second-pass responses are used in the final re-ranking process.",
"ProphetChat w/o history means we utilize the top-ranked simulated future to generate the response without the help of the history.",
"ProphetChat w/o selector denotes the model where we sequentially sample the responses and the futures randomly without using the selector.",
"ProphetChat w/o train means we directly utilize the fine-tuned GF , GB and the fixed S without post-training.",
"We manually choose a fixed ensemble weight for the ensemble generation process instead of using a trainable gate.",
"ProphetChat w/ gold future denotes the model that utilizes the history and the gold future, which is inaccessible in the inference phase, to generate the response.",
"Our implementation is based on the open-source toolkit Transformers (Wolf et al., 2020).",
"For the generator GF and GB , we initialize them with the publicly released DialoGPT medium and DialoGPT-MMI medium 1 .",
"For the dialogue selector, we use the pre-trained GRADE 2 as initialization.",
"We firstly use AdamW (Loshchilov and Hutter, 2017) with learning rate 3e-5 to fine-tune GF and GB .",
"Then, we jointly train the ensemble gate g and the top non-transformer layers of the selector S with learning rate 2e-5, while keeping other parameters (i.e., GF , GB and most of the parameters of S except for its top layers) fixed.",
"We set the curriculum hyperparameters ( , ) as (0.0, 1.0) on both datasets.",
"We fix the sample number n of both the response and the future as 10 and vary the simulation beam size k { 1 , 2 , 3 , 5 } .",
"We choose k = 5 on DailyDialog and k = 3 on PersonaChat.",
"We use top-k sampling (Fan et al., 2018) to generate the first-pass responses, the futures, and the second-pass responses with the temperature as 0.7 and k as 40.",
"All the hyperparameters are chosen depending on 1 https://github.com/microsoft/DialoGPT 2 https://github.com/li3cmz/GRADE 967 Models Readability kappa Sensibleness kappa Specificity kappa Posterior-GAN 0.58 0.42 0.46 0.49 0.21 0.58 RegDG 0.60 0.45 0.51 0.59 0.27 0.50 DialoGPT F 0.68 0.52 0.64 0.61 0.44 0.52 DialoGPT F,rerank 0.69 0.50 0.69 0.60 0.45 0.64 ProphetChat 0.71 0.52 0.75 0.53 0.49 0.49 Models Readability kappa Sensibleness kappa Specificity kappa Posterior-GAN 0.64 0.58 0.50 0.65 0.24 0.48 RegDG 0.65 0.56 0.53 0.55 0.28 0.63 DialoGPT F 0.69 0.59 0.68 0.66 0.42 0.52 DialoGPT F,rerank 0.70 0.44 0.72 0.52 0.48 0.52 ProphetChat 0.72 0.43 0.77 0.61 0.53 0.54 Table 2: Human evaluation results on DailyDialog (the upper) and PersonaChat (the lower) datasets.",
"Automatic Metrics.",
"We use BLEU (Papineni et al., 2002) to measure the word overlap between the ground truth responses and the generated ones.",
"For simplification, we use B-n to denote the n-gram overlap scores.",
"We employ Distinct 1-4 (Li et al., 2016a) to measure the diversity of the generated responses, where Distinct-n (abbreviated as D-n ) represents the ratio of distinct n-grams in responses.",
"We adopt the embedding-based metrics (i.e., Average , Extrema , and Greedy ) (Liu et al., 2016) to measure the semantic relevance between the ground truth responses and the generated ones.",
"Human Evaluation.",
"We ask three well-educated annotators to score 150 randomly selected responses generated by ProphetChat and other baselines.",
"The annotators are asked to evaluate the human-likeness of the responses from three perspectives: readability, sensibleness and specificity.",
"For readability, we ask annotators whether the response is grammatically correct and easy to read.",
"For sensibleness and specificity, we follow Adiwar-dana et al. (2020) to conduct the evaluation.",
"For all three metrics, the annotators are asked to give 0-1 labels.",
"We provide the averaged scores and further calculate the Fleiss's kappa (Fleiss, 1971) to measure the inter-annotator agreement.",
"Table 1 presents the overall performance of our proposed method as well as its variants and ablations.",
"Table 2 shows the human evaluation results.",
"Compared with the two non-GPT baselines, GPT-based models generally achieve superior performance, especially in Distinct and human evaluation.",
"ProphetChat outperforms all the baseline methods by a large margin on both datasets in almost all automatic metrics and all human evaluation metrics.",
"For human evaluation results, the Fleiss's kappa scores are mainly distributed in [0.4, 0.6], which 968 means annotators achieved moderate agreement.",
"With the same model parameters, we have several model variants by using different hyperparameters or computation flow in the inference phase.",
"Here we mainly discuss two types of model variants: (1) the model with different simulation beam size k , (2) the final re-ranking among the first-pass responses or the second-pass responses.",
"The simulation beam size.",
"When simulating the dialogue future, we can choose different beam sizes to balance the computation cost and the performance.",
"We test k { 1 , 2 , 3 , 5 } on both datasets and find k = 5 is better than others on DailyDialog, while k = 3 is enough on PersonaChat.",
"It can be seen that when k is small, increasing k can boost the performance.",
"With the appropriate choice of the simulation beam size, ProphetChat can be deployed to various scenarios with different computation resources.",
"We further directly test the future simulation performance by comparing the futures generated by our method and generated using the gold responses.",
"The results are listed in Table 3.",
"It can be observed that on DailyDialog, with the increase of k , our future simulation method gradually catches up with the teacher forcing counterpart in BLEU and Distinct, while still lagging behind in embedding-based metrics.",
"On PersonaChat, our method even outperforms TF rerank in several metrics.",
"These findings proves that we are able to obtain dialogue futures of good quality solely based on the history through our effectiveness future simulation algorithm.",
"Re-ranking among the first pass or the second pass responses.",
"Recall that in our main framework, we finally gather the k first-pass responses and the k n second-pass responses together and finally re-rank them with the selector conditioned on both the history and the corresponding future.",
"From Table 1 we can find that on both datasets, re-ranking using both groups of responses yield better performance in most of the metrics than only using one of them.",
"When comparing their individual performance, it can be observed that on DailyDialog, ProphetChat second is superior to ProphetChat first in BLEU and Distinct, while ProphetChat first wins in embedding-based metrics.",
"On PersonaChat, ProphetChat second wins all metrics except BLEU.",
"There exist some cases where the simulated futures are meaningless or include irrelevant information.",
"When this happens, the final re-ranking process comes as the remedy.",
"We find that the proportions of the test cases where the final responses are picked from the second-pass responses are 40.4% on DailyDialog and 36.6% on PersonaChat, which are less than the proportions of the second-pass responses involved in re-ranking.",
"This finding indicates that re-ranking plays a vital role to select the appropriate responses from the two groups of candidate responses of various qualities.",
"We make ablation study from several perspectives including the effect of the history, the selector and the training algorithm.",
"Table 1 shows that although only using the simulated future (i.e., ProphetChat w/o history) can generate plausible responses, the performance is largely inferior to the full model.",
"Also, we observe that ProphetChat w/o selector underperforms the full model, demonstrating the effectiveness of the selector which helps simulate better futures.",
"When considering the training objective, we find that ProphetChat w/o train already achieves good performance, but jointly training the whole model further makes our model perform better.",
"Finally, when provided with the gold future, ProphetChat w/ gold future outperforms ProphetChat in terms of embedding-based metrics on both datasets, and BLEU on PersonaChat, while underperforming on other metrics.",
"In other words, with the simulated futures, ProphetChat can achieve comparable performance with the model that \"cheats\" to see the gold future, which also demonstrates the effectiveness of our method.",
"Figure 3 presents two cases sampled from the two datasets.",
"For ProphetChat, in addition to its final response, we provide its corresponding first-pass response, and the simulated future of the response.",
"From the two cases we can observe that by taking the simulated future into consideration, ProphetChat generates more informative responses than baselines.",
"Specifically, in case 1, when a two-choice query is issued in the history, the first-pass response chooses online\" as the answer and meanwhile poses another question.",
"ProphetChat then uses the history and this first-pass response to simulate the future where the other possible choice (i.e., the bookstore) is talked about.",
"Given the history and the simulated future, ProphetChat finally obtains its response which not only answers the query 969 Case 1 History: That is cool.",
"in the history but also incorporates the cues in the future.",
"This response becomes more informative than the previous one.",
"A similar phenomenon can also be found in case 2 where the final response is more comprehensive that connects the history and the future smoothly.",
"We propose a novel response generation framework that utilizes the simulated dialogue futures in the inference phase to enhance response generation.",
"To acquire the dialogue futures, we design an effective beam-search-like roll-out strategy using a history-to-response dialogue generation model and a dialogue selector.",
"To make use of the simulated future, we use the dynamic ensemble of the history-to-response and the future-to-response generation model.",
"Experiment results demonstrate the effectiveness of our proposed method on two popular datasets.",
"In the future, we plan to enable our future simulation method to simulate multiple turns of dialogue futures.",
"We would like to thank the anonymous reviewers for their constructive comments.",
"This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106600), National Natural Science Foundation of China (NSFC Grant No. 62122089 and No. 61876196), and Beijing Outstanding Young Scientist Program (No. BJJWZYJH012019100020098).",
"This paper proposes a new dialogue generation framework that utilizes the simulated dialogue futures in the inference phase to enhance the generation of the response.",
"Generative approaches are widely used in a wide range of dialogue applications.",
"The proposed method improves the quality of the generated responses, which could be beneficial to research and real-world applications.",
"The research will not pose ethical issues.",
"This paper doesn't involve any data collection and release thus there are no privacy issues.",
"All the datasets used in this paper are publicly available and are widely adopted by researchers to test the performance of open-domain response generation models.",
"This paper conducts human evaluation to evaluate the quality of the generated responses.",
"Three part-time research assistants were recruited to do human evaluation with clearly demonstrated evaluation rules.",
"They worked with the pay 100 CNY/hour during their evaluation."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain"
] |
[
"This paper focuses on generating multi-hop reasoning questions from the raw text in a low resource circumstance.",
"Such questions have to be syntactically valid and need to logically correlate with the answers by deducing over multiple relations on several sentences in the text.",
"Specifically, we first build a multi-hop generation model and guide it to satisfy the logical rationality by the reasoning chain extracted from a given text.",
"Since the labeled data is limited and insufficient for training, we propose to learn the model with the help of a large scale of unlabeled data that is much easier to obtain.",
"Such data contains rich expressive forms of the questions with structural patterns on syntax and semantics.",
"These patterns can be estimated by the neural hidden semi-Markov model using latent variables.",
"With latent patterns as a prior, we can regularize the generation model and produce the optimal results.",
"Experimental results on the HotpotQA data set demonstrate the effectiveness of our model.",
"Moreover, we apply the generated results to the task of machine reading comprehension and achieve significant performance improvements.",
"Question generation (QG) is a hot research topic that aims to create valid and fluent questions corresponding to the answers by fully understanding the semantics on a given text.",
"QG is widely used in many practical scenarios: including providing practice exercises from course materials for educational purposes (Lindberg et al., 2013), initiating the dialog system by asking questions (Mostafazadeh et al., 2017), and reducing the labor cost of creating large-scale labeled samples for the QA task (Duan et al., 2017).",
"The mainstream QG methods can be summarized into the rule-based and neural-based models.",
"The first method often transforms the input Corresponding author.",
"text into an intermediate symbolic representation, such as a parsing tree, and then convert the resulting form into a question by well-designed templates or general rules (Hussein et al., 2014).",
"Since rules and templates are hand-crafted, the scalability and generalization of this method are limited.",
"Respectively, the neural model usually directly maps the text to question based on neural network (Du and Cardie, 2017), which is entirely data-driven with far less labor.",
"Such a model can be typically regarded as learning a one-to-one mapping between the text and question.",
"The mapping is mainly used to generate simple questions with a single sentence.",
"However, due to the lack of fine-grained modeling on the evidential relations on the text, such a method has minimal capability to form the multihop questions that require sophisticated reasoning skills.",
"These questions have to be grammatically valid.",
"Besides, they need to logically correlate with the answers by deducing over multiple entities and relations in several sentences and paragraphs of the given text.",
"As shown in",
"Fig.(1), the question asks the director of a film, where the film was shot at the Quality Cafe in Los Angeles and Todd Phillips directed it.",
"These two relations can form a reasoning chain from question to answer by logically integrating the pieces of evidence Los Angeles , Quality Cafe , and Old School as well as the pronoun it distributed across S 1 in paragraph 1 and S 1 , S 2 in paragraph 2.",
"Without capturing such a chain, it is difficult to precisely produce the multi-hop question by using Old School as a bridging evidence and marginal entity Todd Phillips as the answer.",
"For the task of multi-hop QG, a straightforward solution is to extract a reasoning chain from the input text.",
"Under the guidance of the reasoning chain, we learn a neural QG model to make the result satisfy the logical correspondence with the answer.",
"However, the neural model is data-hungry, and the scale of training data mostly limits its performance.",
"Each training example is a triple combined with the text, answer, and question.",
"Since labeling such a combination is labor-intensive, it is difficult to ensure that we can always obtain sufficient training data in real-world applications.",
"We thus formalize the problem as the low-resource generation of multi-hop questions, which is less explored by existing work.",
"This task has substantial research value since reasoning is crucial in quantifying the high-level cognitive ability of machines, and low resource is the key to promote the extensive application.",
"In order to address the problem, we propose to utilize unlabeled data, which is usually abundant and much easier to obtain.",
"Although such data does not combine the questions with the texts and answers, the unlabeled questions contain plentiful expressive forms with structural patterns on the syntax and semantics.",
"These patterns can be seen as the template to produce the questions.",
"Thus, we can use the patterns as the prior to regularize the QG model and obtain better results accordingly.",
"Motivated by the above observations, we propose a practical two-stage approach to learn a multihop QG model from both a small-scale labeled data and a large-size unlabeled corpus.",
"In particular, we first exploit the neural hidden semi-Markov model (Dai et al., 2016) to parameterize the sophisticated structural patterns on the questions by latent variables.",
"Without domain knowledge, the variables can be estimated by maximizing the likelihood of the unlabeled data.",
"We then heuristically extract a reasoning chain from the given text and build a holistic QG model to generate a multi-hop question.",
"The evidential relations in the reasoning chain are leveraged to guide the QG model, so as to let the generated result meet multi-hop logical correspondence with the answer.",
"Simultaneously, we naturally incorporate the prior patterns into the QG model.",
"In this way, we can regularize the model and inform it to express a question reasonably.",
"That can improve the syntactic and semantic correctness of the result.",
"With the parameterized patterns, the whole model can be learned from the labeled and unlabeled data in an end-to-end and explainable manner.",
"In order to better balance the supervision of the labeled data and the usage of prior patterns, we propose to optimize the model by reinforcement learning with an augmented evaluated loss.",
"Experiments are conducted on the HotpotQA (Yang et al., 2018) data set, which contains a large number of reasoning samples with manual annotation.",
"Evaluated results in terms of automatic metrics and human judgment show the effectiveness of our approach.",
"Moreover, we apply our generated results to the task of machine reading comprehension.",
"We view the results as pseudo-labeled samples to enrich the training data for the task.",
"That can alleviate the labeled data shortage problem and boost the performance accordingly.",
"Extensive experiments are performed to show the efficacy of our approach in this application with the help of low-resource QG.",
"The main contributions of this paper include, We dedicate to the topic of low-resource generation of multi-hop questions from the text.",
"We propose a practical approach to generate multi-hop questions with a minimal amount of labeled data.",
"The logical rationality of the results is guided by the reasoning chain extracted from the text.",
"Besides, the results are regularized to ensure the correctness of syntax and semantics by using the prior patterns estimated from a large-size of unlabeled data.",
"We show the potential of our approach in a real-world application on machine reading comprehension by using the generated results.",
"The rest of this paper is organized as follows.",
"Section 2 elaborates on the proposed low-resource QG framework.",
"Section 3 presents experimental results, while Section 4 shows the QG application and demonstrates its usefulness.",
"Section 5 reviews related works and Section 6 concludes this paper.",
"In this section, we first define some notations and then present the details of the proposed QG framework, including the learning of prior patterns from the unlabeled data, and the multi-hop QG network guided by the reasoning chain and prior patterns.",
"Let DL = { ( B i , A i , Y i ) } ni =1 denote a small set of labeled data that consists of n examples on the text B , answer A , and question Y .",
"Besides, we assume that there are a large number of unlabeled data DU = { Q j } N j =1 available, where Q j DU shares the same characteristics with Y i DL and N > n .",
"Each text contains multiple paragraphs and sentences, involving several logically correlated entities.",
"We aim to generate the new question Y (cid:48) and answer A (cid:48) given the evaluated text B (cid:48) by a QG model, where the answer A (cid:48) often is a salient entity in the text B (cid:48) .",
"The question Y (cid:48) is produced by find-ing the best Y to maximize the conditional probability in Y = arg max Y (cid:48) (cid:81) Tt =1 p ( y t | B (cid:48) , A (cid:48) , Y (cid:48) <t ) , where Y (cid:48) <t represents the outputted 1 th to ( t 1) th terms, y t is the t th term.",
"The question has to be syntactically and semantically correct.",
"Also, it needs to correspond to the answer by logically deducing over multiple evidential entities and relations scattered across the text.",
"Since the resource in DL may not be enough to support accurately learning of the p ( ) , we transfer the linguistic knowledge in DU and combine it with DL to enhance the training.",
"The expressive pattern on the question can be viewed as a sequence of groups.",
"Each group contains a set of term segments that are semantically and functionally similar.",
"Such segmentation is not explicitly given but can be inferred from the text's semantics.",
"It is difficult to characterize this structural pattern by simple hand-crafted rules, while we do not have extra labeled data to learn the pattern by the methods like Variational Auto-Encoder (VAE) (Kingma and Welling, 2014).",
"In order to tackle this problem, we propose to employ the neural hidden semi-Markov model.",
"The model parameterizes the similar segments on the input questions by probabilistic latent variables.",
"Through unsupervised learning, these variables can be trained on the unlabeled data.",
"That can well represent the intricate structural patterns without domain knowledge.",
"Besides, the variables can be incorporated into the generation model naturally, which makes the results more interpretable and controllable.",
"Given a question Q with a sequence of terms { q t } Tt =1 , we model its segmentation by two variables, including a deterministic state variable z t { 1 , , K } that indicates the segment to which the t th term belongs, and a length variable l t { 1 , , L } , which specifies the length of the current segment.",
"We assume the question is generated based on a joint distribution as",
"Eq.(1) by multi-step emissions, where i( ) is the index function; the index on t th term is i( t ) = (cid:80) tj =1 l j , with i(0) = 0 and i( T (cid:48) ) = T ; q i( t 1)+1:i( t ) is the sequence of terms ( q i( t 1)+1 , , q i( t ) ) .",
"That is, we first produce a segment based on the latent state z t , and then emits term with a length of l t on that segment.",
"p ( z t +1 , l t +1 | z t , l t ) is the transition distribution, where the ( t + 1) th latent state and length are conditioned on their previous ones.",
"Since the length mainly depends on the segment, we can further factorize the distribution as p ( l t +1 | z t +1 ) p ( z t +1 | z t ) .",
"p ( l t +1 | z t +1 ) is the length distribution, and we fix it to be uniform up to a maximum length L .",
"In this way, the model can be encouraged to bring together the functionally similar emissions of different lengths.",
"p ( z t +1 | z t ) is the state distribution, which can be viewed as a K K matrix, where each row sums to 1 .",
"We define this matrix to be",
"Eq.(2), where e o , e j , e k R d are the embeddings of the state o , j , k respectively, and b o,j , b o,k are the scalar bias terms.",
"Since the adjacent states play different syntactic or semantic roles in the expressive patterns, we set b o,o as negative infinity to disable self-transition.",
"We apply a row-wise softmax to the resulting matrix to obtain the desired probabilities.",
"p ( q i( t 1)+1:i( t ) | z t , l t ) is the term emission distribution conditioned on a latent state and a length.",
"Based on the Markov process, the distribution can be written as a product over the probabilities of all the question terms, as",
"Eq.(3).",
"In order to compute the term probability, we leverage a neural decoder like the Gated Recurrent Unit (GRU) (Cho et al., 2014).",
"We first formulate the hidden vector h jt for yielding j th term as h jt = GRU ( h j 1 t , [ e z t ; e q i( t 1)+ j 1 ]) , where [ ; ] is a concatenation operator, e q i( t 1)+ j 1 and e z t are the embedding of the term and corresponding segment, respectively.",
"By attending over the given question using h jt , we can produce a context vector v jt , as g z t (cid:12) h jt , where (cid:12) refers to the element-wise multiplication, g z t is a gate for the latent state z t , and there are K gate vectors as trainable parameters.",
"We then pass the vector v jt through a softmax layer to obtain the desired distribution as p ( q i( t 1)+ j | q i( t 1)+ j 1 , z t , l t ) = softmax (W q v jt + b q ) , where W q and b q are the trainable parameters.",
"Considering that the latent variables are unobserved, we then learn the model by marginalizing over these variables to maximize the log marginal-likelihood of the observed question sequence Q , i.e., max ( logp ( Q )) .",
"p ( Q ) can be formulated as",
"Eq.(4) by the backward algorithm (Murphy, 2002), with the base cases T ( o ) = 1 , o { 1 , , K } .",
"The quantities in",
"Eq.(4) are obtained from a dynamic program, which is differentiable.",
"Thus, we can estimate the model's parameters from the unlabeled data DU by back-propagation.",
"t ( o ) = p ( q t +1: T | z t = o ) = (cid:80) Kk =1 t ( k ) p ( z t +1 = k | z t = o ) t ( k ) = p ( q t +1: T | z t +1 = k ) = (cid:80) Lj =1 [ t + j ( k ) p ( l t +1 = j | z t +1 = k ) p ( q t +1: t + j | z t +1 = k, l t +1 = j )] p ( Q ) = (cid:80) Kk =1 0 ( k ) p ( z 1 = k ) (4) 2.3 Multi-hop QG Net with Regularization Afterward, we incorporate the learned patterns into the generation model as the prior.",
"Such prior can be acted as a soft template to regularize the model.",
"That can ensure the correctness of the results in syntax and semantics, especially when the labeled data is insufficient to learn the correspondence between the text and question.",
"Fig.(2) illustrates the architecture of our model.",
"We first estimate the prior pattern by sampling a sequence of latent states z with the length l .",
"We then extract the reasoning chain and other textual contents involved in asking and solving a specific question from the given text.",
"Under the guidance of both the reasoning chain and the prior patterns, we build a multi-hop QG model on the extracted contents by the sequence-to-sequence framework (Bahdanau et al., 2015).",
"The evidential relations in the chain are used to enhance the logical rationality of the results.",
"The prior pattern helps to facilitate the performance in low-resource conditions by specifying the segmentation of the generated results.",
"Using the Viterbi algorithm (Zucchini et al., 2016), we can obtain the typed segmentation of a given question.",
"Such segmentation can be characterized by a sequence of latent states z .",
"Each segment, like the phrase, is associated with a state, reflecting that the state frequently produces that segment.",
"Based on the labeled data DL , we can collect all sequences of latent states, which can be seen as a pool of prior patterns.",
"We sample one from the pool uniformly.",
"And then, we view it as a question template with S distinct segments, as { < z kt , l kt > } Sk =1 , where z t is a state variable for the t th term, l t is the length variable derived by the probability p ( l t | z t ) , z kt and l kt are obtained by collapsing adjacent z t and l k with the same value.",
"In order to easily incorporate into the generation model, we encode the template as a vector v mk = g z t (cid:12) h mk , where h mk is the hidden vector for generating m th term, as GRU ( h mk 1 , [ e z m ; e y t 1 ]) , m satisfies i( m 1) < t i( m ) , k = t i( m 1) .",
"Given a text, we use the method proposed by Yu et al. (2020) to extract the question-related content.",
"In order to make the paper self-contained, we briefly describe the approach in this section.",
"It first extracted the entities from the text, and view them as the potential answers and evidences.",
"It then links the entities to create a graph by three kinds of relations, including dependency, coreference, and synonym.",
"Based on the graph, it heuristically extracts a sub-graph as the reasoning chain.",
"The textual contents on the sub-graph are then gathered, including the answer, reasoning type, evidential entities, and sentences on the entities.",
"The extraction is based on three question types, consisting of the Sequence , Intersection , and Comparison .",
"These types account for a large proportion of the multihop questions on most typical data sets, for example, 92% in HotpotQA data set (Min et al., 2019).",
"We then develop a multi-hop QG model based on the extracted contents.",
"This model is guided by the reasoning chain and prior pattern, so that the generated results are not only logical but also fluent.",
"In the pre-processing phase, we first mask the answer from the input contents by a special token < UNK > , to avoid the answer inclusion problem (Sun et al., 2018).",
"That is, the answer words may appear in the question that would reduce the rationality.",
"Encoder : The reasoning chain is encoded via an N head graph transformer (Vaswani et al., 2017), so as to integrate all evidential relations fully.",
"Each node is represented by contextualizing on its neighbors, as h gv = e v + (cid:107) Nn =1 (cid:80) j N v a n ( e v , e j )W n e j , where (cid:107) denotes the concatenation, e v is the embedding of node's entity, a n ( , ) is n th head attention function, N v is the set of neighbors.",
"By aggregation with N -head attention, we can get a relation-aware vector c g as",
"Eq.(5), where W ng , W h , W d are trainable matrices, C is the set of nodes in the chain.",
"a n ( s t , h gv ) = exp ((W h h gv ) TW d s t ) (cid:80) k N v exp ((W h h gk ) TW d s t ) c g = s t + (cid:107) Nn =1 (cid:80) v C a n ( s t , h gv )W ng h gv (5) Other textual inputs are encoded in two steps: (1) each text term is embedded by looking up the pre-trained vectors, such as BERT (Devlin et al., 2019).",
"(2) The resulting embeddings are fed into a bi-directional GRU to incorporate a sequential context.",
"In detail, the sentences are represented by concatenating the final hidden states of GRU, as [ h b 1 ; h bJ ] , where j th term is h bj = [ h bj ; h bj ] , h bj = GRU ( e bj , h bj +1 ) , h bj = GRU ( e bj , h bj 1 ) ; [ ; ] denotes the concatenation of two vectors; e bj is the augmented embedding of j th term; J is the size of all terms.",
"Similarly, the answer and evidence entities are integrally encoded as h a = [ h a 1 ; h aO ] .",
"Attention : For the textual inputs, we fully integrate the encodings and their correlations by attention.",
"First, we use self-attention (Wang et al., 2017) to grasp the long-term dependency in the sentences, as [ h bj ] Jj =1 = SelfAttn ([ h bj ] Jj =1 ) .",
"Subsequently, we exploit multi-perspective fusion (Song et al., 2018) to grasp the answer-related context in the sentences and strengthen their cross interactions.",
"That is, [ h b (cid:48) j ] Jj =1 = MulP erF use ([ h bj ] Jj =1 , [ h ao ] Oo =1 ) .",
"By aggregating the significant information over all the terms, we can obtain a context vector c t as",
"Eq.(6), where tj is the normalized attention weight, a tj denotes the alignment score, s t refers to the t th hidden state of the decoder, v, b, W s , W b are trainable parameters.",
"a tk = v T tanh (W s s t + W b h b (cid:48) k + b ) tj = exp( a tj ) / (cid:80) Jk =1 exp( a tk ) c t = (cid:80) Jj =1 tj h b (cid:48) j (6) Decoder : Based on the context vector c t , we exploit another GRU as the decoder.",
"Each question term is yielded by the distribution in",
"Eq.(7), where is a 1 -dim embedding of the reasoning type, W o and b o are trainable parameters.",
"We use a copy mechanism (Gu et al., 2016) to tackle unknown words problem, where p copy ( ) denotes the copy distribution.",
"In order to let the questions logically correlate with answers, we guide the decoder by the vector c g , which encodes the reasoning chain.",
"Accordingly, we regularize the model to adaptively fit the prior pattern represented by the vector v mk .",
"That can improve the generated quality when the labeled data is insufficient.",
"p voc ( y t ) = Softmax (W o [ s t ; c t ; c g ; ] + b o ) p copy = (cid:80) Jj =1 tj 1 { y == w j } p g = Sigmoid ( c t , s t , y t 1 ) s t = GRU ( s t 1 , v mk ) p ( y t ) = p g p voc ( y t ) + (1 p g ) p copy ( y t ) (7) 2.3.4 Learning with Limited Labeled Data A straightforward solution to train the above QG model is the supervised learning.",
"It minimizes the cross-entropy loss at each generated term by referring to the ground-truth in the labeled data DL , as L sl = 1 n (cid:80) i DL (cid:80) T i t =1 log p ( y it | Y i ; <t , A i , B i ) .",
"However, since DL only contains a few samples, we would not have enough supervision from DL to get the best results.",
"While we leverage the unlabeled data DU to facilitate the training, it is difficult to subtly balance the supervised signal from DL and the prior pattern learned from DU .",
"In order to address the problem, we resort to reinforcement learning.",
"It can globally measure the overall quality of the results by minimizing the loss L rl = EY s [ r ( Y s )] , where Y s is a sampled result, Y is the ground-truth, is the parameters of the QG model, and is the generation policy of the model.",
"r ( ) is a function to evaluate the generated quality.",
"It is the weighted sum of three rewards, including",
"(a) Fluency : we calculate the negative perplexity (Zhang and Lapata, 2017) of Y s by a BERT-based language model p LM , that is, 2 1 T (cid:80) Tt =1 log 2 p LM ( y t | Y s<t ) ;",
"(b) Answerability : we use a metric QBLEU 4 ( Y s , Y ) (Ne-ma and Khapra, 2018) to measure the matching degree of Y s and Y by weighting on several answer-related factors, including question type, content words, function words, and named entities;",
"(c) Semantics : we employ word movers distance (WMD) (Gong et al., 2019) to measure the predicted result Y s , which has different expressive forms but same semantics with gold Y , as W MD ( Y s , Y ) /Length ( Y ) , where Length ( ) is the length function used as the normalization factor.",
"By considering the metrics are non-differentiable, we exploit the policy gradient method (Li et al., 2017) for optimization.",
"In order to enhance readability, we train the model by a mixed loss, as L = L rl + (1 ) L sl , where is a trade-off factor.",
"We extensively evaluate the effectiveness of our approach, including the comparisons with state-of-the-art and the application on a task of MRC-QA.",
"The evaluations were performed on three typical data sets, including HotpotQA (Yang et al., 2018), ComplexWebQuestions (Talmor and Berant, 2018), and DROP (Dua et al., 2019).",
"These data sets were collected by crowd-sourcing, consisting of 97k, 35k, and 97k examples, respectively.",
"The HotpotQA data set contained a large proportion of labeled examples.",
"Each comprised of the question, answer, and text with several sentences.",
"Therefore, the HotpotQA data set was suitable to evaluate the multi-hop QG task.",
"The other two data sets contained abundant reasoning questions, but they are not associated with the text and answer.",
"We thus viewed them as the unlabeled data.",
"In order to simulate the low-resource setting, we randomly sampled 10% of the HotpotQA train set to learn the models, and evaluated them on the test set with a size of 7k.",
"We verified the generated quality for each evaluated method by comparing the matching degree between the result and gold-standard.",
"We adopted three standard evaluation metrics in the QG task, including BLEU-4 (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE-L (Lin, 2004).",
"Furthermore, we carried out human evaluations to analyze the generated results.",
"To avoid biases, we randomly sampled 100 cases from the test set and generated questions for each test case by all the evaluated methods.",
"We then invited eight students to give the binary rating on each question independently.",
"The rating was in terms of three metrics, including valid syntax , relevance to input textual sentences, and logical rationality to the answer.",
"We averaged the cumulative scores of the 100 binary judgments as the performances corresponding to the evaluated methods.",
"The resultant scores were between 0 100, where 0 is the worst, and 100 is the best.",
"We used Randolph's free-marginal kappa (Randolph, 2005) to measure the agreements among the raters.",
"Model configurations were set as follows.",
"We leveraged 768-dimension pre-trained vectors from the uncased BERT to embed words.",
"The number of states K and emissions L in the semi-Markov model was set to 50, 4, respectively.",
"The size of hidden units in both encoder and decoder was 300.",
"The recurrent weights were initialized by a uniform distribution between 0.01 and 0.01 and updated with stochastic gradient descent.",
"We used Adam (King-ma and Ba, 2015) as the optimizer with a learning rate of 10 3 .",
"The trade-off parameter was set to 0.4.",
"For pattern learning, we parsed every question by the Stanford CoreNLP toolkit (Manning et al., 2014).",
"We then learn better segmentation by forcing the model not to break syntactic elements like the VP and NP.",
"To reduce the bias, we carried out five runs and reported the average performance.",
"We compared our approach against five typical and open-source methods.",
"These methods were based on the sequence-to-sequence framework with attention.",
"According to the different techniques used, we summarized them as follows.",
"(a) the basic model with the copy mechanism, i.e., NQG++ (Zhou et al., 2017);",
"(b) ASs2s (Kim et al., 2019), which encoded the answer separately to form answer-focused questions;",
"(c) CorefNQG (Du and Cardie, 2018) that incorporated linguistic features to represent the inputs better;",
"(d) MaxPointer (Zhao et al., 2018) using gated self-attention to form questions for long text inputs;",
"(e) MPQG+R (Song et al., 2018) that captured a broader context in the text to produce the context-dependent results.",
"In order to understand the effect of unlabeled data, we examined two variants of the proposed model.",
"That is, Ours-Pattn which was trained without unlabeled data, and Ours-50% that used 50% unlabeled data for training.",
"Moreover, we performed empirical ablation studies to gain better insight into the relative contributions of various components in our model, including Ours-Chain that discarded the guidance of the reasoning chain vector and Ours-Reinf that replaced the reinforcement learning with a simple supervised learning.",
"As reported in",
"Tab.(1), our approach achieved the best performance.",
"We significantly outperformed the best baseline (i.e., MaxPointer ) by over 11.6%, 10.5%, 11.4% in terms of BLEU-4 , METEOR , and ROUGE-L , respectively.",
"From the comparisons among Ours-Pattn , Ours-50% , and Ours , we found that the performance improves with more unlabeled data.",
"Although we lack an appropriate comparative model based on the unlabeled data, these results can still indicate the effectiveness of our model.",
"With only limited labeled data, our model can effectively leverage unlabeled data to guide the generation.",
"Besides, the ablation on all evaluated components led to a significant performance drop.",
"We may infer that the reasoning chain is crucial for multi-hop QG on the guidance of logical correlations.",
"Also, the reinforcement learning can globally optimize the model by balancing the prior patterns and labeled supervision.",
"Tab.(2) illustrated the results of human evaluation.",
"The average kappa were all above 0.6, which indicated substantial agreement among the raters.",
"Consistent with quantitatively analyzed results in Section 3.2, our model significantly outperformed all baselines in terms of three metrics, where the improvement on the rationality metric was the largest.",
"That showed the satisfied quality of our generated results, especially in terms of multi-hop ability.",
"We investigated the value of unlabeled data for the overall performance, especially when the labeled data was inadequate.",
"In particular, we randomly sampled { 10% , 40% , 70% , 100% } of the labeled data, and split the unlabeled data into ten subsets.",
"For each scale on the labeled data, we incrementally added by one subset of unlabeled data to learn the QG model.",
"We used the same training protocol and reported the overall performance on the test set.",
"As shown in",
"Fig.(3), even a small amount of unlabeled data can play a decisive role in improving the performance in terms of three metrics.",
"The ratio of improvement was higher when the scale of the labeled data was small.",
"The results further verified the usefulness of unlabeled data on learning the QG model with a low labeled resource.",
"In order to examine the gains of our training approach with the mixed loss objective, we tuned the trade-off parameter (i.e., ) from [0 , 1] with 0.1 as an interval.",
"The performance change curve was displayed in",
"Fig.(4).",
"The best performance was obtained at = 0 .",
"4 .",
"The performance dropped dramatically when was close to 0 or 1.",
"We would infer that both objectives could help to measure the quality of the outputted results better, and thus train the model efficiently.",
"The task of machine reading comprehension (MRC-QA) aims to answer given questions by understanding the semantics of the text.",
"The mainstream methods are based on the neural network.",
"These methods often need a lot of labeled data for training, but the data is expensive to obtain.",
"Thus, we are inspired to apply our generated results to enrich the training set for the task of MRC-QA.",
"Fig.(5) demonstrates the architecture of this application.",
"Given a case from a small-size labeled set, we first extracted the contents correlated to a specific question from the case's text, including the reasoning chain, reasoning type, answer, evidential entities, and sentences on the entities.",
"We then learned our QG model based on the contents and generated questions as pseudo data to augment the labeled set.",
"For each evaluated case, we could yield approximately 5 8 pseudo samples consisted of the text, question, and answer.",
"Later, we trained an MRC-QA model on the augmented labeled set and reported the performance on the test set.",
"By referring to the leaderboard on the HotpotQA website, we Figure 5: Apply multi-hop QG to support MRC-QA.",
"chose an open-source model for MRC-QA, named Dynamically Fused Graph Network (DFGN) (Qiu et al., 2019), which achieved the state-of-the-art at the paper submission time.",
"Considering that the size of the training set impacted the model's performance, we ran the entire pipeline with different proportions of the labeled data, so as to verify the proposed model thoroughly.",
"Two evaluation metrics were employed, including exact match ( EM ) and F1 .",
"We examined the tasks of answer span extraction, supporting sentence prediction, and the joint task in the distractor setting.",
"Tab.(3) showed that our QG+QA model trained on 30% labeled data obtained competitive performance against the QA model learned on the 100% labeled data.",
"When using more labeled data, the performance advantages of our QG+QA model continued to grow.",
"Such results showed that our QG model could enlarge the coverage and diversity of the MRC-QA training set given limited labeled data.",
"That could help to learn the state-of-the-art.",
"Moreover, we conducted case studies to understand the generating behavior vividly.",
"As exhibited in",
"Tab.(4), our QG model could generate massive questions on multi-hop reasoning.",
"Contrastively, the gold standard often contained one sample since it was labor-intensive to enumerate all the cases.",
"Passage : ... ( S 1 ) 'The Hard Easy' is the episode written by Thomas Herpich.",
"( S 2 )",
"He was born in October, 1979 in Torrington, Connecticut, American, along with his twin brother Peter who was a painter and artist.",
"( S 3 )",
"Thomas is best known for being a storyboard artist on the animated television series 'Adventure Time'.",
"...",
"Answer : Peter 5 Related Works Existing models for the QG task include rule-based and neural-based methods.",
"Since the rules are handcrafted, the first method is of low scalability (Chal-i and Hasan, 2015).",
"The researcher turns to the neural model.",
"It can directly map the inputs into questions by using an attention-based sequence-to-sequence framework, which is entirely data-driven with far less labor.",
"Various techniques have been applied to this framework, including answer separately encoding, using linguistic features, capturing border context, reinforcement learning, and emphasizing on question-worthy contents (Pan et al., 2019).",
"These methods are mainly used to generate simple questions with a single sentence (Yu et al., 2019).",
"They are challenging to generate the reasoning questions accurately due to the lack of fine-grained modeling on the evidential relations in the text.",
"In order to address the problem, Yu et al. (2020) proposed to incorporate a reasoning chain into the sequential framework, so as to guide the generation finely.",
"All the methods are built of the assumption that sufficient labeled data is available.",
"However, labeled data is quite scarce in many real-world applications (Yang et al., 2019).",
"The low-resource problem has been studied in the tasks such as machine translation (Gu et al., 2018), pos tagging (Kann et al., 2018), word embedding (Jiang et al., 2018), text generation (Wiseman et al., 2018), and dialogue systems (Mi et al., 2019).",
"To the best of our knowledge, the low-resource multi-hop QG is untouched by existing work.",
"We thus focus on this topic and propose a method to fulfill the gap.",
"We have proposed an approach to generate the questions required multi-hop reasoning in low-resource conditions.",
"We first built a multi-hop QG model and guided it to satisfy the logical rationality by the reasoning chain extracted from a given text.",
"In order to tackle the labeled data shortage problem, we learned the structural patterns from the unlabeled data by the hidden semi-Markov model.",
"With the patterns as a prior, we transferred this fundamental knowledge into the generation model to produce the optimal results.",
"Experimental results on the HotpotQA data set demonstrated the effectiveness of our approach.",
"Moreover, we explored the generated results to facilitate the real-world application of machine reading comprehension.",
"We will investigate the robustness and scalability of the model.",
"This work is supported by the Key-Area Research and Development Program of Guangdong Province (2018B010107005, 2019B010120001), National Natural Science Foundation of China (61906217,61806223,61902439,U1711262,U161 1264,U1711261,U1811261,U1811264,U1911203), National Key R&D Program of China (2018YFB1 004404), Guangdong Basic & Applied Research Foundation (2019B1515130001), Fundamental Re search Funds for Central Universities (19lgpy219)."
] | [
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"objective",
"other"
] |
[
"Do self-supervised speech models develop human-like perception biases?",
"Abstract Self-supervised models for speech processing form representational spaces without using any external labels.",
"Increasingly, they appear to be a feasible way of at least partially eliminating costly manual annotations, a problem of particular concern for low-resource languages.",
"But what kind of representational spaces do these models construct?",
"Human perception specializes to the sounds of listeners' native languages.",
"Does the same thing happen in self-supervised models?",
"We examine the representational spaces of three kinds of state-of-the-art self-supervised models: wav2vec 2.0, HuBERT and contrastive predictive coding (CPC), and compare them with the perceptual spaces of French-speaking and English-speaking human listeners, both globally and taking account of the behavioural differences between the two language groups.",
"We show that the CPC model shows a small native language effect, but that wav2vec 2.0 and HuBERT seem to develop a universal speech perception space which is not language specific.",
"A comparison against the predictions of supervised phone recognisers suggests that all three self-supervised models capture relatively fine-grained perceptual phenomena, while supervised models are better at capturing coarser, phone-level, effects of listeners' native language, on perception.",
"Recent advances in speech recognition and representation learning show that self-supervised pretraining is an excellent way of improving performance while reducing the amount of labelled data needed for training.",
"For example, for the LibriSpeech dataset (Panayotov et al., 2015), the current best word error rates (Xu et al., 2021; Zhang et al., 2020) are obtained by systems based on the self-supervised wav2vec 2.0 model (Baevski et al., 2020).",
"Systems using self-supervised pre-training, both using wav2vec 2.0 and using HuBERT (Hsu et al., 2021a,b), show excellent word error rates after having been fine-tuned on only ten minutes of labelled data.",
"What is the effect of this self-supervised pretraining?",
"What type of representational spaces are learned by these models?",
"Lakhotia et al. (2021) compared wav2vec 2.0, HuBERT, and contrastive predictive coding (CPC: Oord et al. 2017; Rivire and Dupoux 2021) using an ABX discriminability metric (Schatz, 2016), demonstrating that all three models preserve and enhance linguistically relevant speech sound contrasts in the language they are trained on.",
"We build on this work, asking how these representational spaces compare to the perceptual spaces of human listeners, as inferred from behaviour on phone discrimination experiments.",
"Human listeners develop speech perception biases under the influence of their native languages.",
"For example, Japanese native speakers tend to confuse the English sounds /r/ and /l/ (Yamada and Tohkura, 1990) ( r ight and l ight in English will be perceived as the same or very similar), and English native speakers struggle with the French contrast /y/-/u/ (Levy, 2009), having difficulty perceiving the difference between words such as r u e (/y/: street) and r ou e (/u/: wheel).",
"These mispercep-tions start to show early on in the native language acquisition process: infants older than 6 months exhibit a facilitating effect at discriminating sounds from their native language, but a decline at doing so for some non-native sounds (Kuhl et al., 2006).",
"As the importance of this improvement for native sounds and this decline for non-native sounds seems to have a positive impact on infants' future language ability (Tsao et al., 2004; Kuhl et al., 2005), having a perceptual space with native language biases is probably essential to perceive and understand correctly native speech in all situations (with environmental noises, speaker change, etc).",
"If our goal is to have speech models that are as resilient and as adaptable as humans, it is thus 7591 interesting to see if they present the same native language specific biases.",
"By measuring human listeners' ability to discriminate a variety of familiar and unfamiliar speech sounds, we can create a detailed profile of listeners' perceptual biases in the form of a set of sounds' dis-criminabilities.",
"We then ask whether the training language influences self-supervised speech models in the same way that human listeners' native languages do.",
"In order to study speech models' perception biases and compare them with humans', we use the Perceptimatic benchmark datasets, 1 a collection of experimental speech perception data intended to facilitate comparison with machine representations of speech.",
"As of this writing, Perceptimatic contains Frenchand English-speaking participants' behaviour on discrimination tasks for phones in six different languages, for a total of 662 phone contrasts, along with the sound stimuli used during the experiments.",
"As in Lakhotia et al. (2021), we test state-of-the-art self-supervised models: wav2vec 2.0 (Baevski et al., 2020), HuBERT (Hsu et al., 2021a,b) and a CPC model (Rivire and Dupoux, 2021).",
"We train these models on English and French speech recordings (the native languages of the participants in Perceptimatic).",
"We compare the performance of these self-supervised models with a supervised ASR model, DeepSpeech (Amodei et al., 2016), trained on the same data but using phonemic labels.",
"To study the degree to which the models' representational space is impacted by properties of speech per se, we also train the same models on recordings of acoustic scenes not including human vocalisations (environmental noises, animal sounds, music, and so on).",
"We use mel-frequency cepstrum coefficients (MFCCs) as an acoustic baseline.",
"We show that: (1) Self-supervised models trained on speech recordings are better than models trained on acoustic scenes (non-speech) to discriminate speech sounds and to predict human discrimination behaviour (2) They are good at predicting human discrimination behaviour at the stimuli level, but they are worse than neutral acoustic features when we average human results per contrast (3) They show very few native (training) language effect.",
"All our code and data are freely available.",
"2 1 https://docs.cognitive-ml.fr/ perceptimatic/ 2 https://github.com/JAMJU/Sel_ 2 Related work We are not the first to compare speech models' representational spaces with humans.",
"Feather et al. (2019) used metamers as a tool to compare deep neural networks with humans.",
"In a comparison between three speech recognition models, including a fine-tuned wav2vec 2.0 model, Weerts et al. (2021) showed that wav2vec 2.0 was the best at matching human low-level psycho-acoustic behaviour.",
"However, the model exhibited clear differences with respect to humansshowing, for example, heightened sensitivity to band-pass filtering and an under-reliance on temporal fine structure.",
"To perform a comparison at a slightly higher level of speech perception, Scharenborg et al. (2018) visualised a supervised ASR model's internal representations of different speech sounds to investigate its adaptation to new ambiguous phone categories and compare it to humans' behaviour.",
"Multiple datasets containing human behavioural data have been collected and openly released to encourage comparison of models with humans.",
"It is for this reason that the Interspeech 2008 Consonant Challenge (Cooke and Scharenborg, 2008) and the OLLO database (Meyer et al., 2010), containing humans' phone identification behaviour in different paradigms, were created.",
"This is also the case for the datasets making up the Perceptimatic database (Millet et al., 2019; Millet and Dunbar, 2020a,b; Millet et al., 2021) that we employ in this article, which were individually used to study less well-performing models than the ones we use here.",
"More than just informing us on the kind of information speech models learn, comparing them with humans can have a broader impact on our knowledge of how human perceive speech, and how they learn to do so.",
"Schatz et al. (2021) showed, for example, that a simple self-supervised speech model reproduces the reduced sensitivity to the English [r]/[l] contrast when trained on Japanese speech recordings.",
"Pointing to the fact that the model used lacks abstract phone categories, the authors proposed an alternative to standard explanations of early phonetic learning in infants, as theories about this phenomenon rely heavily on the notion of phone categories.",
"With a similar method, Matusevych et al. (2020) tested the ability of various self-supervised speech models to reproduce infants' discrimination behaviour in multiple languages for a small set of supervised_models_perception_biases 7592 pairs of sounds.",
"However, no quantitative comparison with behavioural data was made.",
"Within the same test framework, Schatz and Feldman (2018) showed that a neural network trained to perform phone recognition was better at qualitatively reproducing Japanese and English native speakers' discrimination behaviour than an HMM-GMM model, focusing once again on the [r]/[l] pair of sound and also on vowel length differences.",
"In this paper, we decide to:",
"(i) evaluate different self-supervised speech models on more contrasts than these previous works",
"(ii) directly compare their results with human behaviour",
"(iii) measure models' similarity to humans at the stimuli level on top of doing it at the contrast level.",
"Our probes of human speech perception use ABX phone discrimination tests, in which participants hear three speech extracts: A, B and X (an A/B/X triplet ).",
"A and B always differ in exactly one phone, and X is always (a distinct recording of) the same sequence of phones as either A or B (for example, A: /pap/, B: /pip/, X: /pap/).",
"We ask the participants to indicate which of the first two sounds (A or B) is the most similar to the last sound (X).",
"The ability of the participants to select the correct ( target ) rather than the distractor ( other ) speech extract indicates how well the population tested can discriminate the two phone categories p 1 and p 2 that target and other belong to (in our example, /i/ and /a/).",
"We call p 1 : p 2 a contrast .",
"In this paper, we examine the results of monolingual Frenchand English-speaking participants.",
"As in previous works (Millet et al., 2019; Millet and Dunbar, 2020a,b; Millet et al., 2021), to test models in the same way as participants, we extract a representation M for each of the three stimuli making up each A/B/X triplet in the experiment.",
"We compute, for a triplet target / other /X, each model's -value: = DT W ( M other , MX ) DT W ( M target , MX ) (1) with DT W being a distance obtained using dynamic time warping to aggregate a frame-level co-sine distance along the warping path.",
"The larger (more positive) the -value obtained, the better the model is at discriminating the target and other phone categories.",
"In our comparison between humans' and models' discrimination behaviour, we will generally use the raw -values.",
"The accuracy of the model on a specific triplet, independent of human listeners' behaviour, can also be computed by considering the model to be correct if the corresponding value is greater than zero and incorrect otherwise.",
"Below, we will refer to this objective accuracy as an ABX score.",
"We compare self-supervised speech models to see if the representational spaces they develop during training on a language resemble humans' perceptual spaces.",
"We choose to test three state-of-the-art self-supervised models: contrastive predictive coding (CPC), the basis for the current best-performing systems on the Zero Resource Speech Challenge evaluation (Dunbar et al., 2021); wav2vec 2.0; and a HuBERT model.",
"These last two models obtain excellent word error rates on the task of semi-supervised speech recognition (self-supervised pretraining plus supervised fine-tuning on a small cor-pus).",
"As we use behavioural data from French and English-speaking participants, models are trained on either French or English recordings.",
"To test for the impact of training on speech recordings compared to other types of sounds, we also train the models on recordings of acoustic scenes (non-speech).",
"We choose one specific output layer for each model, using the one that obtains the best result in terms of human similarity.",
"We use classic acoustic features as a baseline, using the first 13 mel-frequency cepstrum coefficients (MFCCs), calculated using LIBROSA , 3 with a window of 25 ms and a stride of 10 ms. We also train DeepSpeech (Amodei et al., 2016) as a supervised reference.",
"We use a light version of a model that uses contrastive predicting coding (CPC: Rivire et al. 2020).",
"This model is smaller than HuBERT or wav2vec 2.0, as it is only made up of 5 convolutions (the encoder) and one LSTM layer (the sequence model).",
"It is trained using a contrastive loss.",
"For a sequential input x = ( x 1 , ...x t , ..., x T ) , at time t , given the output of the sequential model, the loss 3 https://librosa.org/ 7593 pushes the model to distinguish the K next outputs of the encoder in the future from randomly sampled outputs from another part of x .",
"The detailed loss can be found in Appendix A. We use the output of the sequence model as representations for the CPC model.",
"We test wav2vec 2.0 (Baevski et al., 2020).",
"The model is made up of three elements: an encoder, a quantizer, and a decoder.",
"The encoder is made up of five convolutional layers, the quantizer is a dictionary of possible representations, and the decoder is made up of 12 transformer layers.",
"When an input z is given to the quantizer, it outputs the representation q from the dictionary that is the closest to the input.",
"For an input x , wav2vec 2.0 uses the encoder to transform it into z , which is then quantized into q , and in parallel z is directly passed to the decoder to obtain a context representation c .",
"Like the CPC model, wav2vec 2.0 is trained using a contrastive loss L m .",
"Unlike the CPC model, it uses masking.",
"Given a decoder representation of the context around some masked time step t , the loss pushes the model to identify the true quantized speech representation q t from among a set of K +1 quantized candidate representations q Q t including q t and K distractors uniformly sampled from other masked time steps in the same utterance (see Appendix A for details).",
"We analyse the fifth layer of the decoder.",
"We also test a HuBERT model (Hsu et al., 2021a,b).",
"This model uses exactly the same architecture as wav2vec 2.0 (except for the quantizer, which is not used), but with a different objective.",
"Its training relies on an unsupervised teacher h (in our case, a K-means algorithm) that assigns a cluster label to each frame.",
"Formally, we have h ( X ) = Z = [ z 1 , ...z T ] , with z t a C -class categorical variable.",
"HuBERT is trained to guess this cluster assignment for masked and unmasked frames at the same time.",
"The detailed loss can be found in Appendix A. The unsupervised teacher h is initially a K-means clustering on MFCCs.",
"After a round of training using this initial teacher, h is replaced by a K-means model trained on the output of the sixth transformer layer of the model, and training restarts from scratch.",
"We analyse the output of the sixth transformer layer.",
"As a supervised reference system, we test a trained DeepSpeech model (Amodei et al., 2016).",
"This model is not too intensive to train, is known to obtain reasonable ASR results, and has previously been compared to human speech perception (Millet and Dunbar, 2020b; Weerts et al., 2021).",
"We train it to generate phonemic transcriptions.",
"DeepSpeech is composed of two convolutional layers followed by five RNN layers and a fully connected layer.",
"The model is trained using spectrograms as input and a CTC loss, without a language model.",
"We use representations extracted from the fourth RNN layer of the model, as it seems to give the best results, both in terms of absolute phone discriminability and for predicting human behaviour.",
"In order to compare humans' and models' perceptual spaces, we use two metrics: the log-likelihood ( (cid:96)(cid:96) ) of a binary regression model on the experimental responses, and the Spearman's correlation between the average of the model's -values and participants' accuracies averaged within each phone contrast.",
"These allow for predictions at two levels of granularity: the discriminability of individual experimental items ( (cid:96)(cid:96) ) and the overall discriminability of pairs of phones ( ).",
"In the default ( native ) setting, French-trained models are used to predict French-speaking participants' discrimination results, and similarly for English.",
"See below for details.",
"For each model tested (see Section 3.3), we fit a probit regression to predict the binary responses of the participants (coded as correct or incorrect) using as a predictor the values obtained from the model's representational space.",
"In addition to a global intercept, the regression has other predictors to account for various nuisance factors: whether the right answer was A (1) or B (0); the order of the trial in the experimental list; a categorical predictor for the participant; and another for the Perceptimatic subset the result belongs to.",
"We fit the model with an L1 regularisation (lasso).",
"The (cid:96)(cid:96) is obtained from the fitted regression model: the larger (less negative) the (cid:96)(cid:96) , the better the given model's values predict the experimental data; thus, the more similar the model's representational space is to the perceptual space of the experimental participants.",
"We complement the log-likelihood metric with a correlation statistic.",
"We compute the Spearman correlation ( ), a correlation between the ranks of participants' accuracies (using their gradient results if available) and models' -values, both averaged at the level of the phone contrast (zero indicates no correlation, one indicates a perfect monotonic relation).",
"This measure averages out effects of individual A/B/X stimuli below the level of the phone contrast.",
"Beyond global measures of how well models' representational spaces correspond to human listeners' perceptual spaces, we seek to assess how well the models reproduce group differences caused by the participants' native languages.",
"One could think that humans are very good at discriminating all the sounds from their native language, and that they struggle to differentiate all the sounds from other languages.",
"But reality is more complex than that: some contrasts are equally difficult or easy (even if they are not native) to discriminate for different language groups.",
"The only way to study accurately native language biases is to focus on the relative discrimination difficulties shown by different language groups when listening to the same contrasts.",
"We present a method which evaluates the ability of the models to directly predict the relative difficulty of contrasts across the two language groups we have in the dataset we use.",
"In other words, we measure if the models, when trained on French and English, show the same discrimination behaviour differences than Frenchand English-speaking participants.",
"We first normalise the values obtained by each model by dividing by their standard deviation (within model/training condition, across all A/B/X triplets), in order to put the values on the same scale for the two models.",
"We average the normalised values by contrast.",
"We then calculate the overall accuracies for each phone contrast in the listening experiment.",
"We calculate difference scores: for each phone contrast, we subtract an English model's average values from the average value for the corresponding French-trained model.",
"We do the same with the English-speaking and the French-speaking participants' contrast-level accuracy scores.",
"This yields a measure of the native language effect for each phone contrast, for each model, and similarly for the human participants.",
"For each model, we compute a Pearson correlation between its contrast-level native language effects and those of human listeners.",
"The closer the correlation is to one, the better the phone-level native language effects are captured by a given model.",
"Because this score calculates a native language effect independently for the models and for the participants, it is not susceptible to the same confounds as an approach which would derive the native language effect from a comparison of two different (and thus not necessarily comparable) models' fit to the data.",
"Note, however, that the approach we propose is restricted to predicting contrast-level effects of native language.",
"For the human data, we use five experiments from the Perceptimatic benchmark dataset, 4 containing the results of Frenchand English-speaking participants results on ABX phone discrimination experiments.",
"Stimuli come from French, English, Brazilian Portuguese, Turkish, Estonian, and German, and test a variety of contrasts between vowel and consonant sounds, some of which are familiar, and some of which are unfamiliar, to the listeners.",
"The five datasets use different kinds of stimulus triplets, including short three-phone extracts cut from running speech ( Zero Resource Speech Challenge 2017 and Pilot July 2018 datasets), as well as read-speech nonwords, which highlight English consonants and vowels ( Pilot August 2018 ), compare English with French vowels in a crosslinguistic task ( Cogsci-2019 ), or highlight vowel contrasts in a variety of languages ( WorldVowels ).",
"The combined dataset contains 4231 distinct triplets (each of which is sometimes presented to participants in the order target/other/X, sometimes in the order other/target/X), which test 662 phone contrasts, and contains data from 259 French-speaking participants and 280 English-speaking participants (not the same participants for all stimuli).",
"The speech models we use are trained on 600-hour subsets of either the English or the French Com-4",
"See https://docs.cognitive-ml.fr/ perceptimatic/ for access to, and more detailed descriptions of, the data.",
"monVoice datasets (Ardila et al., 2019).",
"To train DeepSpeech as a phone recognizer, the text transcriptions included in CommonVoice are phone-mized using eSpeakNG.",
"5 When English-trained models are used to predict English-speaking participants' results and French-trained for French-speaking participants', we refer to the trained models as nat-cpc , nat-w2v , nat-hub , and nat-deep .",
"To measure the impact of training on speech versus non-speech audio, the self-supervised models are also trained on a 595-hour subset of the Audioset dataset (Gemmeke et al., 2017) containing no human vocalizations.",
"6 We refer to these models as aud-cpc , aud-w2v , and aud-hub .",
"Each dataset is split randomly into train (80%), test (10%) and validation (10%).",
"All recordings are resampled at 16000Hz and transformed into mono channel using sox.",
"7 For the CPC model, we use the Facebook Research implementation 8 with all the default parameters.",
"We train the model for 110 epochs and take the models that present the best loss on the validation set.",
"For wav2vec 2.0, we use the Fairseq Base implementation, 9 using the LibriSpeech configuration.",
"As (Baevski et al., 2020), we train the models for 400k updates and take the model with the best loss on the validation set.",
"For HuBERT, we also use the Fairseq Base implementation 10 and the LibriSpeech configuration.",
"We follow all the training settings of (Hsu et al., 2021a): our first-pass training takes its unsupervised teacher labels from a K-means algorithm with 50 clusters on the MFCCs for 10% of the training set, training for 250k updates.",
"We then extract the representation of the training set from the sixth transformer layer and use these representations to train a new K-means with 100 clusters and re-train the model using these categories as the teacher for 450k updates.",
"We use the model with the best loss on the validation set.",
"Deep-5 https://github.com/espeak-ng/ espeak-ng 6 A complete list of the labels kept can be found in our github: https://github.com/JAMJU/Sel_ supervised_models_perception_biases 7 http://sox.sourceforge.net/ 8 https://github.com/facebookresearch/ CPC_audio 9 https://github.com/pytorch/fairseq/ tree/master/examples/wav2vec 10 https://github.com/pytorch/fairseq/ tree/master/examples/hubert",
"Speech.",
"11 We train the models for 150 epochs (to reach an overfitting point), saving a checkpoint of the model for each epoch.",
"We then take the checkpoint that produces the best result in terms of Phone Error Rate (PER) on the validation set.",
"We use specaugment (Park et al., 2019) to improve the model performance.",
"The French model obtains 7.8% PER on the French test set and the English model obtains 22.75% PER on the English test set.",
"In all graphs, statistical significance of comparisons is evaluated by bootstrapping over participants' results ( N = 10000 ); redundant statistical comparisons are omitted for clarity (i.e. C > A is omitted when C > B and B > A ).",
"Confidence intervals shown are 95% bootstrap intervals.",
"Before using models' representational spaces to predict human discrimination behaviour, we look at how well models discriminate phones in their training language.",
"We use the sign (positive/negative) of the values to calculate the objective accuracy of selecting the target phone ( ABX scores ).",
"For interpretability, we calculate scores only on the subsets of Perceptimatic containing monolingual English and French stimuli which were presented to listeners in their native language ( Zero Resource Speech Challenge 2017 , WorldVowels n and Pilot August ).",
"Results are shown in Table 1. In general, native self-supervised models obtain scores as good as or better than the supervised reference and human listeners, with a small preference for the nat-w2v model.",
"They show a clear improvement over the corresponding models trained on acoustic scenes (non-speech).",
"Certain datasets present more difficulties for the self-supervised models relative to nat-deep notably, the English read-speech nonwords (from the WorldVowels and Pilot August subsets).",
"Further details and comparison of ABX scores between native and non-native settings can be found in Appendix C. 5.2 Predicting human listeners To assess how well self-supervised models' representational spaces match humans' perceptual spaces for speech, we compute the log-likelihood ( (cid:96)(cid:96) ) and the Spearman correlation ( ) metrics over 11 https://github.com/SeanNaren/ deepspeech.pytorch 7596 Zero Vowels PilotA Fr En Fr En En Humans 0.84 0.80 0.80 0.84 0.74 MFCC 0.76 0.77 0.73 0.76 0.88 nat-deep 0.82 0.83 0.75 0.87 0.94 nat-cpc 0.85 0.85 0.67 0.83 0.85 aud-cpc 0.76 0.74 0.55 0.72 0.66 nat-w2v 0.88 0.88 0.71 0.83 0.84 aud-w2v 0.76 0.73 0.53 0.71 0.78 nat-hub 0.87 0.87 0.76 0.83 0.82 aud-hub 0.77 0.78 0.57 0.77 0.74 Table 1: ABX scores on three subsets of the Perceptimatic dataset, each containing a French and an English subset; the larger (closer to one) the better.",
"the entire Perceptimatic dataset (see Section 3.4) in the native-language training condition.",
"Results can be seen in Figure 1. First, we need to note that the models' performance appears to be importantly tied to training on speech, rather than simply on natural audio.",
"Indeed, the models trained on acoustic scenes (non-speech) consistently perform worse than the native-trained models and MFCCs, on both measures.",
"For the (cid:96)(cid:96) metric, nat-w2v does at least as well as, or (for French) somewhat better than, the supervised reference at modelling human listeners' perceptual confusions; most native self-supervised models perform similarly.",
"Self-supervised models appear to learn representational spaces at least as similar to human native listeners' as our supervised phone recogniser when measured in this way.",
"The metric, which correlates models' with humans' average dissimilarity ( or accuracy) for each phone contrast, reveals a different pattern.",
"Here, nat-deep performs best.",
"Furthermore, native self-supervised models perform worse than generic MFCC features.",
"This suggests a component of human speech perception that is poorly captured by self-supervised models at the contrast level.",
"(On some subsetsnotably the WorldVowels set of familiar and unfamiliar vowel contrasts self-supervised models are better than MFCCs, but are still worse than our supervised reference; see Appendix B.)",
"To confirm the difference of result for the contrast level (the metric) and the stimuli level (the (cid:96)(cid:96) metric), we compute the Spearman correlation S p ea r m a n c o rr e l a ti on L og li k e li hood French English Figure 1: Log-likelihood values (top: shorter/higher bars are better) and Spearman correlation (bottom: taller bars are better) for French ( left ) and English participants ( right ).",
"metric at the stimuli level, averaging participants' results over the stimuli, instead of doing it, for models and humans, over contrasts.",
"The results of this analysis can be found in Figure 2. We notice that this new analysis, done at the stimuli level, gives similar results than our log-likelihood metric.",
"This supports the idea that the bad results for the original metric of the self-supervised models we consider are due to the averaging over contrast.",
"To illustrate the comparisons at the level of phone contrasts, in Figure 3 we plot the average accuracy (per contrast) for French-speaking participants results against (left) DeepSpeech trained on French, one of the best-performing models, and (right) wav2vec 2.0 trained on AudioSet (aud-w2v), one of the models that is the least similar to humans.",
"To look for the presence of human-like native language biases, we look at the ability of native models to predict the difference in behaviour between the Frenchand the English-speaking groups (see Section 3.5).",
"Figure 4 (left) shows the native language effect assessed over the entire Perceptimatic datasetthat is, the correlation, at the contrast 7597 S p ea r m a n c o rr e l a ti on a t t h e s ti m u li l e v e l French English Figure 2: Spearman correlation at the stimuli level (taller bars are better) for French ( left ) and English participants ( right ).",
"level, between the differences in across language-training conditions, on the one hand, and the differences in accuracy for the two listener groups, on the other.",
"Nat-cpc is competitive with nat-deep at predicting differences in groups.",
"Nat-hub and nat-w2v , on the other hand, show very native language effect.",
"Figure 4 (right) shows the same analysis, but on only the WorldVowels dataset.",
"The stimuli in this dataset are constructed to specifically induce different discrimination behaviour between the two language groups.",
"Here, nat-deep shows a much better ability to predict native language effects, both in the absolute, and relative to the other models.",
"As this analysis is done at the level of phone contrasts, and not individual stimuli, we could think that as our supervised reference model is trained to produce phonemic transcriptions, it probably gives it a head start at predicting differences in discrimination behaviour driven by phone categories.",
"To look more precisely at this, we compute our na-N a ti v e e ff ec t All WorldVowels Figure 4: Native language effect for each model, the bigger the bar, the better the models capture language specificities in the discrimination behaviour between the two groups.",
"tive effect at the stimuli level instead of the contrast level.",
"The results of this analysis can be seen in Figure 5, for the all dataset and for the WorldVowels subset.",
"Going to the stimuli level reduces radically the native effect measured.",
"This is expected, as the number of participants' result per stimulus is small, and the effect measured on humans is thus very noisy when measured at this level, and therefore harder to reproduce for the models.",
"However, we can notice that our supervised reference and the CPC model are still the ones that exhibit the most native language effect.",
"We showed that the self-supervised models we tested seem to learn representational spaces relevant for predicting human phone discrimination at the stimuli level.",
"However, while humans show 7598 consistent discrimination behaviour for certain contrasts, whatever the stimuli, the self-supervised models we test do not capture systematic effects of contrasts between specific pairs of phones.",
"Unlike our supervised reference, their similarity to human perceptual spaces is limited to capturing the discriminability of specific individual stimuli.",
"The models tested were similar, but wav2vec 2.0 showed a slight advantage for predicting this kind of behaviour.",
"We have also shown that training on speech data is essential to obtaining a human-like perceptual space: for all of our metrics (ABX accuracy or similarity to humans), training on speech leads to better results than training on acoustic scenes (non-speech).",
"This strongly suggests that the benefits of self-supervised speech models comes from learning characteristics of human speech, not simply the fact that they are better general audio features.",
"We speculate that this is not just important to their ability to predict human speech perception and to discriminate phones, but also of their (related) utility for doing downstream tasks such as ASR.",
"What these models learn about speech, however, is not typically language-specificat least, not in the same way that human perception is.",
"Wav2vec 2.0 and HuBERT do not model language-specific differences in human speech perception, and can be seen as modelling a language-neutral or universal speech perception space.",
"Indeed, they exhibit very few native language effect (see Figure 4 and 5).",
"We note that the idea of self-supervised models learning universal speech features is consistent with the fact that models trained on one language, or multilingually, have proven useful for representing speech in unseen languages (Riviere et al., 2020).",
"CPC does capture effects of native language on perception at the contrast level, but to a far lesser extent than our supervised reference when we focus on a subset of Perceptimatic designed to capture important differences in discrimination behaviour for our two groups of participants (WorldVowels).",
"Our CPC model differs from the other models tested in its small size, its causal architecture (wav2vec and HuBERT use transformers), and in that it does not use masking during its training.",
"Its architecture is probably the most biologically plausible of the three self-supervised models we tested.",
"We should note, however, that it does not make it the best predictor of human discrimination behaviour among the three models (see Figure 1 and 2).",
"One possible explanation for the self-supervised models' limitations we observe is insufficiency of training data: the models in question have generally shown good performance on downstream tasks when pre-trained on large amounts of data.",
"We tested this using available pretrained wav2vec and HuBERT models trained on much larger amounts of data.",
"The detailed results can be found in Appendix E. The models show a slight improvement, but, when looking at the statistic at the phone contrast level, they are still worse than MFCCs.",
"Contrary to previous results (Millet and Dunbar, 2020a,b), our supervised reference system is quite good at predicting human discrimination behaviour (in particular at the contrast level), and clearly predicts a native language effect.",
"The main differences in our experiment with (Millet and Dunbar, 2020b) are the type of model (DeepSpeech instead of HMM-GMM), and with (Millet and Dunbar, 2020a) the type of training objective (phone recognition rather than prediction of orthographic text), and the size of the training corpora (we use fewer data).",
"Predicting phones rather than orthography seems to be critical (as we demonstrate in Appendix F), and using a neural network instead of a Bayesian model (HMM-GMM) leads to a more human-like representational space, as already highlighted by (Schatz and Feldman, 2018).",
"Given the advantage supervised phone recognizers show, a different approach to developing more human-like representational spaces in self-supervised models might be the inclusion of tasks or constraints that push them to take into account longer time scales in order to encourage them to construct longer, more phone-like units.",
"This research was supported by the cole Doc-torale Frontires du Vivant (FdV) Programme Bettencourt, by the Connaught Fund and the Arts and Science Tri-Council Bridging Fund, University of Toronto, and by French Agence Nationale de la Recherche grants ANR-17-CE28-0009 (GE-OMPHON), ANR-11-IDFI-023 (IIFR), ANR-11-IDEX-0005 (USPC), ANR-10-LABX-0083 (EFL), ANR-17-EURE-0017 Frontcog, ANR-10-IDEX-0001-02 PSL*, ANR-19-P3IA-0001 PRAIRIE 3IA Institute.",
"This work was performed using HPC resources from GENCI-IDRIS (Grant 20XX-AD011012415)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"How do we know if a particular medical treatment actually works?",
"Ideally one would consult all available evidence from relevant clinical trials.",
"Unfortunately, such results are primarily disseminated in natural language scientific articles, imposing substantial burden on those trying to make sense of them.",
"In this paper, we present a new task and corpus for making this unstructured evidence actionable.",
"The task entails inferring reported findings from a full-text article describing a randomized controlled trial (RCT) with respect to a given intervention, comparator, and outcome of interest, e.g., inferring if an article provides evidence supporting the use of aspirin to reduce risk of stroke , as compared to placebo .",
"We present a new corpus for this task comprising 10,000+ prompts coupled with full-text articles describing RCTs.",
"Results using a suite of models ranging from heuristic (rule-based) approaches to attentive neural architectures demonstrate the difficulty of the task, which we believe largely owes to the lengthy, technical input texts.",
"To facilitate further work on this important, challenging problem we make the corpus, documentation, a website and leaderboard, and code for baselines and evaluation available at http: //evidence-inference.ebm-nlp.com/ .",
"Biomedical evidence is predominantly disseminated in unstructured, natural language scientific manuscripts that describe the conduct and results of randomized control trials (RCTs).",
"The published evidence base is vast and expanding (Bas-tian et al., 2010): at present more than 100 reports of RCTs are published every day, on average.",
"It is thus time-consuming, and often practically impossible, to sort through all of the relevant published literature to robustly answer questions such With respect to < outcome > , what is the reported di erence between patients receiving <A> and those receiving <B> ?",
"Given the critical role published reports of trials play in informing evidence-based care, organizations such as the Cochrane collaboration and groups at evidence-based practice centers (EPCs) are dedicated to manually synthesizing findings, but struggle to keep up with the literature (Tsaf-nat et al., 2013).",
"NLP can play a key role in automating this process, thereby mitigating costs and keeping treatment recommendations up-to-date with the evidence as it is published.",
"In this paper, we consider the task of inferring whether a given treatment is effective with respect to a specified outcome.",
"Typically, this assessment is done relative to other treatment options (i.e., comparators).",
"We assume the model is provided with a prompt that specifies an intervention, a comparator, and an outcome, along with a full-text article.",
"The model is then to infer the reported findings with respect to this prompt (Figure 1).",
"From a healthcare perspective, this inference task is an essential step for automating extraction of actionable evidence from trial reports.",
"From an NLP standpoint, the proposed task can be seen as an instance of natural language inference (Bowman et al., 2015), viewing the article and prompt as the premise and hypothesis, respectively.",
"However, the problem differs in a few important ways from existing NLP formulations.",
"First, the inputs: prompts are brief ( 13.5 words on average), but articles are long ( 4200 words).",
"Further, only a few snippets of the article will be relevant to the label for a given prompt.",
"Second, prompts in this domain are structured, and include only a few types of key information: interventions, comparators, and outcomes.",
"Methods that exploit this regularity are likely to be more accurate than generic inference algorithms.",
"Another interesting property of this task is that the target for an article depends on the interventions and outcome specified by a given prompt.",
"Most articles report results for multiple interventions and outcomes: 67% of articles in our corpus are associated with two or more prompts that have different labels, e.g., indicating that a specific treatment was comparatively effective for one outcome but not for another.",
"As a concrete example from our corpus, infliximab was reported as realizing no significant difference with respect to dysmenorrhea , compared to a placebo .",
"But infliximab was associated with a significant increase in pain killer intake , again compared to placebo .",
"Generally positive words in an article (e.g., improved) will confuse inference models that fail to account for this.",
"One may view these as built-in adversar-ial examples (Jia and Liang, 2016) for the task.",
"A key sub-problem is thus identifying snippet(s) of evidence in an article relevant to a given input prompt .",
"Attention mechanisms (Bahdanau et al., 2014) conditioned on prompts would seem a natural means to achieve this, and we do find that these achieve predictive gains, but they are modest.",
"Existing attention variants seem to struggle to consistently attend to relevant evidence, even when explicitly pretrained using marked rationales.",
"This corpus can facilitate further research in attention variants designed for lengthy inputs (Choi et al., 2017; Yang et al., 2016).",
"In sum, our contributions are threefold.",
"We: (1) formulate a novel task ( evidence inference ) that is both practically important and technically challenging; (2) Provide a new publicly-available corpus comprising 10,000+ evidence prompts, answers, supporting evidence spans, and associated full-text articles ( http:// evidence-inference.ebm-nlp.com ) all manually annotated by medical doctors; (3) Develop baseline algorithms to establish state-of-the-art performance and highlight modeling challenges posed by this new task.",
"The specialized nature of this task necessitates adequate domain knowledge.",
"We thus recruited medical doctors (MDs) via the Upwork platform to perform annotation.",
"Annotators were assigned to one of three mutually exclusive groups, responsible for: (1) prompt generation, (2) prompt and article annotation, and (3) verification.",
"Figure 2 depicts the annotation process schematically; we describe these steps in more detail below.",
"It is important to note that annotation was performed on full-texts, not just abstracts.",
"Evidence relevant to a particular clinical question is quite often only available in the full text.",
"Indeed, in our dataset, the relevant evidence span was marked in the abstract only 40.5% of the time.",
"This first task entails generating questions (or prompts) that are answerable on the basis of a corresponding full-text article describing an RCT.",
"Such prompts concern the comparison of specific interventions with respect to a particular outcome.",
"Specifically, these questions ask whether an article reports that the specified intervention was found (in the described trial) to be significantly more effective than a comparator treatment, with respect to the outcome of interest.",
"Prompt creators were instructed to identify a snippet, in a given full-text article, that reports a relationship between an intervention, comparator, and outcome.",
"Generators were also asked to provide answers and accompanying rationales to the prompts that they provided; such supporting evidence is important for this task and domain.",
"As a concrete example, an example generated prompt for a trial described in (Marre et al., 2009) specifies Proinsulin : insulin ratio as the outcome of interest, liraglutide (1.8 mg) plus glimepiride as the intervention, and rosiglitazone plus glimepiride as the comparator.",
"Liraglutide and rosiglitazone are both drugs that can be used to treat type 2 diabetes.",
"In this case, use of the intervention (liraglutide) was reported to signifi-(1) prompt generation Intervention metronidazole Outcome pre-term birth Finding decreased Comparator placebo (2) independent annotation of prompts (3) verification of prompts, annotations, and rationales Intervention metronidazole Outcome pre-term birth Finding decreased Comparator placebo Patients receiving metronidazole experienced significantly fewer pre-term births than those in the comparison group.",
"cantly decrease the proinsulin to insulin ratio, as supported by the following evidence snippet extracted by the prompt creator: Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo.",
"Trials typically report results for multiple outcomes, and often for more than two interventions.",
"As discussed above, results for these will often differ.",
"For instance, postprandial plasma glucose was another outcome reported in the aforementioned trial report, and placebo plus glimepeiride was considered as another comparator.",
"Therefore, we instructed prompt generators to create multiple prompts for each full-text article.",
"On average, this yielded 4.19 distinct prompts per article.",
"1 Articles may be deemed invalid for a few reasons, chiefly for not describing RCTs.",
"2 Of 3525 articles considered, 1106 were marked invalid (31.4%).",
"The prompt generators provided valid answers and rationales in 95.9% and 97.8% of cases, respectively, as per the verifier.",
"To summarize: prompt creation entails specifying answerable clinical questions, along with answers to these and supporting rationales (evidence snippets from the text).",
"This task is the most laborious step in the annotation process.",
"1 We restricted generators to creating at most five prompts for a given article; prior to imposing this constraint, annotators would sometimes generate > 10 prompts per article.",
"2 We used the RobotReviewer RCT classifier, which improves upon the standard MEDLINE RCT filter (Marshall et al., 2018), but some false positives remain.",
"For this task, annotators were asked to answer prompts on the basis of a particular article.",
"More specifically, given an evidence prompt articulating an intervention, comparator, and outcome (gen-erated as described above), the task was to determine whether the associated article reports results indicating that the intervention significantly increased , significantly decreased , or realized no significant difference , relative to the comparator and with respect to the outcome.",
"The annotator was also asked to mark a snippet of text supporting their response.",
"Annotators also had the option to mark prompts as invalid , e.g., if the prompt did not seem answerable on the basis of the article.",
"Annotations collected in this step are redundant with the classification and rationale independently provided by the prompt generator in the preceding step; this is by design to ensure robust, high-quality annotations.",
"The final task in our annotation process entails a worker verifying the prompts and responses generated in the previous two steps.",
"The verifier is here responsible for checking both whether the prompt (i.e., question) is valid and can be answered from the text, and whether the responses provided are accurate.",
"Verifiers also assess whether the associated supporting evidence provided is reasonable.",
"relevant to making a quality judgment.",
"Nonetheless, this step decidedly improved data quality: 3.8% of prompts, 6.7% of answers, and 7.1% of rationales (supporting evidence snippets) were marked as invalid.",
"All invalid prompts were removed from the corpus; so too were all prompts for which the verifier rejected all answers or all rationales.",
"In an initial pilot round, we acquired annotations on 10 articles, yielding 93 prompts.",
"Three medical doctors (MDs) were tasked with answering these prompts, achieving an agreement of 0.58 (Krippendorf's ).",
"To improve this poor agreement, we provided personalized feedback that addressed systematic issues we observed.",
"Following this feedback, the MDs were asked to re-examine the same set of prompts and update their responses if they felt it appropriate to do so.",
"This resulted in a much improved agreement of =0.84.",
"To verify that this agreement held beyond the specific set with respect to which we provided feedback, we subsequently assigned an additional 113 prompts to the annotators.",
"As measured over these 113 prompts, the three annotators exhibited relatively high agreement between themselves and with the prompt generator (Krippendorf's of 0.75 and 0.80, respectively).",
"We hired 16 doctors from Upwork and split them at random into groups: 10 for prompt generation, 3 for annotation, and 3 for verification.",
"3 In total, we have acquired 10,137 annotated prompts for 2,419 unique articles.",
"For each of these prompts, we have at least two independent sets of labels and associated rationales (supporting snippets).",
"We additionally calculated agreement between prompt generators, annotators and verifiers using Krippendorf's .",
"To calculate this, we converted the verifier's binary labels of valid or not to the label with which they agreed.",
"This yields = 0 .",
"88 .",
"Removing the verifier annotations from the calculation results in = 0.86.",
"Intervention, outcome, and comparator strings contain on average 5.1, 5.3, and 3.4 tokens, respectively.",
"Articles comprise a mean of 4.2k tokens.",
"We provide additional details concerning the dataset in the Appendix.",
"We experimented with a suite of models to establish performance on this task, which we explain below in increasing order of complexity.",
"Majority .",
"Predict the most common class, i.e., no significant difference .",
"Heuristics .",
"This entails two parts: (1) finding the sentence that contains the answer, and (2) interpreting the sentence that possesses the evidence.",
"The first step of this process is achieved through locating the sentence that has the most overlap with words in the outcome, intervention, and comparator.",
"Afterwards, we search for reported p values within the identified sentence, and evaluate whether they seem significant.",
"We provide a detailed description of the heuristics model in Section A of the Appendix.",
"Logistic Regression .",
"A standard logistic regression model trained on top of binary bag-of-words representations of articles and intervention, com-prator and outcome (ICO) frames these are concatenated to form inputs.",
"We use a vocabulary size of 20k (based on frequency of occurrence), thus yielding an input size of 80k.",
"intervention, comparator, and outcome strings accompanying a prompt into vectors i , c , and o , respec-Train",
"tively.",
"Similarly, we encode the article itself into a vector a .",
"We experimented with several encoder options, including simple bag-of-words style encoding (i.e., averaging constituent word vectors) and RNNs.",
"For the latter, we specifically pass a Gated Recurrent Unit (Cho et al., 2014), or GRU, over inputs, yielding hidden states for each article token.",
"In preliminary experiments we found that simple averages over token embeddings worked well for encoding prompts ( i , c and o ), likely because they tend to be quite short.",
"But this encoding works terribly for articles, due to their length.",
"Therefore, we use a GRU to encode articles (uni-directional, as bi-directional added complexity without improving results).",
"In the simplest neural model variant, we simply concatenate the encoded article and ICO frame into a vector [ a ; i ; c ; o ] which is then passed through a feedforward network with a single hidden layer to allow interactions between the prompt and article text.",
"4 As discussed in detail below, we experiment with a variety of attention mechanisms imposed over article tokens.",
"Exploiting the spans of evidence marked as supporting assessments should improve the predictive performance of models.",
"An additional advantage of modeling this explicitly is that models will then be able to provide rationales for decisions (Lei et al., 2016; Zhang et al., 2016; Zaidan et al., 2007), i.e., snippets of text that support predictions.",
"We therefore experiment with model variants that classify input tokens as being relevant evidence (or not) prior to performing inference.",
"We consider both pipeline and joint instantiations of such models.",
"In the former type, the model first identifies spans in the text and then passes these forward to an independent component that makes predictions on the basis of these.",
"In models of the latter type, evidence span tagging and document-level inference is performed 4 We use a linear hidden layer; experiments adding a nonlinearity (ReLU) did not affect results.",
"end-to-end.",
"Evidence snippets are not restricted to sentence boundaries (i.e., are token-wise), but we also consider model variants that relax evidence span tagging to a sentence labeling task (classify-ing sentences as containing any evidence tokens, or not).",
"In either case, which spans are relevant will depend on the prompt assessed.",
"Thus, we consider and contrast variants that condition evidence span prediction on the input prompt.",
"We consider both linear and neural models.",
"For the former, we train two logistic regression models over bag-of-words input representations.",
"The first predicts whether or not a given sentence contains any evidence tokens.",
"Document predictions are then made via a second (independent) logistic regression model that consumes aggregate bag-of-words representations of only those sentences predicted to contain evidence.",
"This is a linear model, and thus does not accommodate interactions; we therefore consider only an unconditioned version.",
"For our pipeline neural model, we first induce vector representations of article sentences via a GRU, and these are then passed through a binary classification layer.",
"To allow interactions between the input prompt and the sentence being classified, we also consider a conditioned variant in which the sentence classification model is provided the induced vector representations of the prompt elements alongside the sentence vector.",
"For end-to-end models, we capitalize on attention mechanisms that learn to focus on (contextu-alized hidden representations of) individual article tokens prior to making a prediction.",
"We consider several variants of attention, and we explore directly pretraining these using the marked evidence spans available in the training data.",
"The simplest attention module we consider is unconditioned; we simply learn weights W that scores hidden states h a output from the article encoder.",
"Concretely, = softmax { w H a } (1) where w R 1 d and H a R d | a | , denoting hidden size by d and article length by | a | .",
"5 We have elided bias terms for presentation.",
"However, the text span relevant to a classification will depend on the prompt under consideration.",
"We thus also consider a conditioned variant of attention.",
"In this version we concatenate the i , c , and o vectors induced by our encoders to the hidden states.",
"Abusing notation a bit, denote the matrix in which we concatenate the i , c , and o vectors to each column in H a by [ H a ; i ; c ; o ] .",
"We then consider an attention variant that passes this concatenated representation through a single hidden layer to score tokens (Figure 4).",
"We consider two ways of converting the provided evidence spans into targets: (1) Imposing a uniform distribution over marked evidence tokens; (2) Setting the target for all marked evidence tokens as",
"1. In both cases we treat the absence of annotations on a token as an implicit negative target (0).",
"It is important to note that the model will see the same article multiple times during training with different evidence span targets, one for each prompt in the train set.",
"The snippet of text that supports a particular assessment naturally depends on the prompt under consideration.",
"Unconditioned attention variants will thus, by construction, be unable to attend exclusively to the relevant spans of text for across all prompts.",
"When training with binary targets, we consider two specifications: one in which the outputs of the attention model are independent (per-token) sigmoids indicating whether or not a word belongs to an evidence span, and another in which attention weights are normalized via a softmax over tokens.",
"The latter is standard, although per-token attention has been previously proposed (Kim et al., 2017).",
"During model development, we used 90% of the train set for training, and the remaining 10% as to monitor performance over epochs.",
"To iteratively assess and refine models during this development phase, we used the standardized validation set.",
"All decisions regarding final experiments to run were made using this validation set, prior to evaluating models on the held-out test set of articles.",
"Results reported in this paper are on the final test set.",
"Note that we report averages for neural models (over five runs) to mitigate noise due to random initialization and fitting.",
"All neural variants were trained up to 50 epochs with a patience of 10 epochs.",
"We monitored performance during training on a nested development set and retained the model that achieved the highest F1 score on this.",
"For GRU encoders we used 32 hidden units.",
"All models for the primary task were trained with batch sizes of 32.",
"We initialized word embeddings to pretrained word vectors induced over a large set of PubMed abstracts (Pyysalo et al., 2013).",
"Given the modest training dataset size, we did not fine-tune these.",
"We use the manually marked supporting snippets as explicit, intermediate supervision for pretraining the attention mechanisms described in 4.2.",
"More specifically, we pretrain the attentional model components for both conditioned and unconditioned attention variants.",
"Concretely, we minimize token-wise binary cross entropy loss with respect to one of the two token-wise targets delineated in the preceding Section.",
"We normalize loss per batch by the number of constituent tokens, using batch sizes of 16.",
"6 We monitor token-wise AUC with respect to the reference evidence span annotations marked in the held-out validation set mention in Section 5.1.",
"We retained the model that achieved the best AUC measured over fifty epochs of attention pretraining (again with a patience of ten) and used these weights as initialization values for fine-tuning the end-to-end inference network.",
"7 6 Memory constraints precluded larger batches.",
"We trained all models with the Adam optimizer using the parameters suggested by (Kingma and Ba, 2014).",
"We trained using PyTorch (Paszke et al., 2017), v 1.0.1.post2 .",
"8 Code for our models and to reproduce our results is available at: https://github.com/jayded/ evidence-inference .",
"Pipeline models first attempt to identify sentences containing evidence.",
"To train these, we generalize token-wise annotations to sentences such that a sentence is labeled 1 if it contains any evidence tokens, and 0 otherwise.",
"We then trained the sentence tagging models described above with these labels, monitoring loss on a nested validation set and retaining the best observed model over 50 epochs.",
"The document-level model subsequently consumes only sentences tagged as relevant.",
"Results on the main task for proposed model variants are reported in Table",
"2. These are averages over five independent runs, to ensure relatively robust measures of model performance.",
"9 The best 8 This is a nightly build, used due to a dependence on recently introduced RNN utilities.",
"9 These models exhibit a fair amount of variance; we report ranges over the validation set in the Appendix.",
"performing model exploits pretrained conditional attention.",
"For the leaderboard we assume a single set of model predictions.",
"To generate these we evaluated models on the test set using the versions that realized the strongest observed performance on the validation set over the aforementioned five runs/initializations.",
"The best performing model (and hence current leader) is the variant that uses pretrained, conditional attention, which aligns with the average results in Table",
"2. Table 4 reports the results here, along with a more standard attentive architecture for context.",
"To highlight the importance of identifying relevant spans to inform predictions, we present results achieved when these are provided directly to models via an oracle' prior to prediction in Table",
"6. Access to this oracle yields a 20+ point jump in F1, indicating that accurately extracting the relevant evidence is critical.",
"Below (Section 6.2) we attempt to elucidate how well (or poorly) attention mechanisms fail to find supporting evidence.",
"A natural question that arises in NLP tasks in which the output depends on both a document and a question (here, a prompt) is: how much does the latter in fact influence model predictions (Kaushik and Lipton, 2018)?",
"We explore this in Table",
"5. Relying only on the prompt (ignoring the article completely) achieves surprisingly strong performance, outperforming a vanilla neural model (sans atten-tion).",
"This is not entirely unreasonable, as certain intervention types will tend to correlate with significant vs insignificant findings, i.e., the prompt Model Precision Recall F1 Best NN 0.531 0.519 0.520 NN (no attention) 0.471 0.439 0.440 prompt 0.344 0.340 0.324 article 0.489 0.468 0.472 Table 5: Average results achieved (macro-averages over five runs) by the neural model when it is provided only the article or only the prompt.",
"itself contains signal.",
"The neural model without attention is likely simply unable to extract meaningful signal from lengthy articles, and so induced representations merely add noise.",
"By contrast, ignoring the prompt severely degrades performance.",
"To provide a sense of how well models are able to identify relevant evidence (i.e., tokens in the supporting snippets marked by annotators), we report token AUCs and evidence masses for all models that assign scores to words.",
"The former captures how well models discriminate evidence from non-evidence tokens in general; the latter measures the relative amount of attention payed to evidence tokens.",
"Concretely we calculate attention mass as a sum of the normalized attention scores assigned to words that belong to reference evidence spans.",
"Thus, e.g., if the evidence token mass were 1, this would mean the model attended to only relevant evidence, ignoring all other tokens.",
"We also experimented with optimizing for this directly during attention pretraining (see Appendix).",
"Aside from the Majority and LR baselines, all of the models explored generate scores encoding token relevance, either explicitly or implicitly.",
"Attentive neural variants induce these by scoring contextualized representations of tokens t , h t for relevance.",
"Pipelined models score sentences, not tokens.",
"For comparison across models, we assign the probability predicted for a given sentence to all of the words that it contains.",
"Note that the maximum evidence token AUC achievable when selecting a sentence is 0.92.",
"Qualitatively, we observe that attention weights often, though not always, square with intuition.",
"In cond-attn+ pretraining cond-attn attn + pretraining attn Model Variants 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 E v i d e n c e t o k e n m a ss Figure 5: Evidence token masses achieved by models on the validation set, after training.",
"a test example wherein the intervention is propofol , the comparator dexmedetomidine , and the outcome Stroke Volume Variation (SVV) and Pulse Pressure Variation (PPV) , the conditioned, pretrained attention model focuses on tokens svv', versus' (suggesting comparison), and p -value indicators (p', 01').",
"This is not surprising given the reasonably high evidence token AUC achieved by this model.",
"Overall, despite conditioning and pretraining attention mechanisms, end-to-end models remain over 20 points behind the oracle variant (Table",
"6. This suggests that the model is failing to suf-ficiently attend to the relevant evidence.",
"Figure 5 supports this conjecture.",
"This shows the total evidence mass realized by models on the validation set (after training for the final task of evidence in-ference).",
"Even pretrained models assign < 14 % of total attention mass to actual evidence tokens.",
"We also explore how token-level discriminative performance varies between pretrained and end-to-end variants (without explicit attention train-ing), and how this changes as learning progresses.",
"Figure 6 plots evidence token AUC over epochs (on the validation set) for attentive model variants (conditioned and unconditioned).",
"We show curves for the case where we use explicit supervision (pretraining; dotted lines) and where relevance is learned only indirectly via the downstream evidence inference objective (no pretraining; solid lines).",
"Interestingly, evidence token AUC reaches maximum values during pretraining (shown as negative epochs) for supervised attention variants, and declines precipitously when the training objective transitions to the downstream task.",
"This suggests a kind of catastrophic forgetting introduced due to shifting objectives.",
"The proposed task is situated at the intersection of information extraction (Cardie, 1997), natural language inference (Bowman et al., 2015), evidence mining (Rinott et al., 2015) and question answering (Harabagiu et al., 2000; Hovy et al., 2000).",
"However, our focus on inferring results from lengthy clinical trial reports pertaining to particular prompts constitutes a unique problem, as discussed in the Introduction.",
"Prior systems have attempted to extract information from articles describing RCTs.",
"For example, ExaCT (Kiritchenko et al., 2010) attempts to extract variables describing clinical trials from articles, and ACRES (Summerscales et al., 2011) ingests extracts key variables from abstracts.",
"Blake and Lucic (2015; 2012) considered the problem of automatically extracting interventions and outcomes in sentences that report direct comparisons.",
"And Mihaila et al. (2013) have proposed annotating and extracting casual statements from biomedical literature.",
"Classifying the modality of statements in scientific literature has also been investigated (Thompson et al., 2008); this relates to identifying evidence.",
"None of these prior efforts attempted to infer the findings concerning the extracted interventions and outcomes, as we do here.",
"We have presented the task of inferring the polarity of comparative results reported in articles describing clinical trials with respect to interventions and outcomes of interest.",
"Such models would render the unstructured evidence currently buried in manuscripts actionable, in turn potentially informing evidence-based care.",
"In addition to the practical import of this problem, the task poses core NLP challenges related to processing lengthy, technical texts, and performing conditional inference over entities within these.",
"Our baseline results establish both the feasibility and difficulty of the task.",
"Very simple baselines (e.g., rule-based methods) perform quite poorly, and modern neural architectures achieve the best results, currently.",
"When models are provided with reference evidence spans from an oracle, they achieve dramatically improved performance.",
"This demonstrates that the key challenge concerns conditionally identifying relevant snippets to inform predictions; attention mechanisms would seem to provide a natural means of allowing the model to learn to focus, and we indeed found that (super-vised) attention provides some predictive gains, but these are relatively modest.",
"The gap between the model that directly consumes only relevant evidence snippets (Table 6) and the best performing end-to-end model is over 20 points in F1.",
"Further, ignoring the article entirely (relying only on the prompt) degrades performance by only 5 points in F1, again suggesting that even the pretrained, conditioned attention variant is not making good use of the relevant evidence contained in articles.",
"The evidence token mass metrics also support this: The best models we have proposed consistently place only 10-15% of the attention mass on tokens actually marked as containing relevant evidence.",
"We are simply not learning to attend well, even with explicit pretraining and conditioning.",
"This motivates a key future research direction: designing more sophisticated attention mechanisms that (conditionally) identify spans of evidence pertinent to a given prompt.",
"We hope this corpus and task provides opportunity to pursue such models.",
"This work was supported by NSF CAREER Award 1750978.",
"We also acknowledge ITS at Northeastern for providing high performance computing resources that have supported this research."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Previous math word problem solvers following the encoder-decoder paradigm fail to explicitly incorporate essential math symbolic constraints, leading to unexplainable and unreasonable predictions.",
"Herein, we propose Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks.",
"Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers.",
"Along with target expression supervision, our solver is also optimized via 4 new auxiliary objectives to enforce different symbolic reasoning:",
"a) self-supervised number prediction task predicting both number quantity and number locations;",
"b) commonsense constant prediction task predicting what prior knowledge (e.g. how many legs a chicken has) is required;",
"c) program consistency checker computing the semantic loss between predicted equation and target equation to ensure reasonable equation mapping;",
"d) duality exploiting task exploiting the quasi duality between symbolic equation generation and problem's part-of-speech generation to enhance the understanding ability of a solver.",
"Besides, to provide a more realistic and challenging benchmark for developing a universal and scalable solver, we also construct a new large-scale MWP benchmark CM17K consisting of 4 kinds of MWPs (arithmetic, one-unknown linear, one-unknown non-linear, equation set) with more than 17K samples.",
"Extensive experiments on Math23K and our CM17k demonstrate the superiority of our NS-Solver compared to state-of-the-art methods 1 .",
"Deep neural networks have achieved remarkable successes in natural language processing recently.",
"Although neural models have demonstrated performance superior to humans on some tasks, e.g. reading comprehension (Rajpurkar et al., 2016; Devlin et al., 2019; Lan et al.), it still lacks the ability of discrete reasoning, resulting in low accuracy on math reasoning.",
"Thus, it is hard for pure neural network approaches to tackle the task of solving math word problems (MWPs), which requires a model to be capable of natural language understanding and discrete reasoning.",
"MWP solving aims to automatically answer a math word problem by understanding the textual description of the problem and reasoning out the underlying answer.",
"A typical MWP is a short story that describes a partial state of the world and poses a question about an unknown quantity or multiple unknown quantities.",
"To solve an MWP, the relevant quantities need to be identified from the text.",
"Furthermore, the correct operators along with their computation order among these quantities need to be determined.",
"Therefore, integrating neural networks with symbolic reasoning is crucial for solving MWPs.",
"Inspired by the recent amazing progress on neural semantic parsing (Liang et al., 2017a) and reading comprehension (Chen et al., 2019), we address this problem by neural-symbolic computing.",
"Recently, many researchers (Wang et al., 2017; Huang et al., 2018; Wang et al., 2018b, 2019; Xie and Sun, 2019; Chiang and Chen, 2019), inspired by an encoder-decoder framework (Cho et al., 2014), apply neural networks to solve MWPs by learning the mapping function between problems and their corresponding equations, and achieve remarkable successes.",
"The encoder uses a neural network to represent a problem as a real-valued vector, and the decoder uses another neural network to generate an equation or expression token by token.",
"The main difference among previous methods is the way to decode expressions or equations.",
"However, they only follow the encoder-decoder paradigm but lacking the ability to explicitly incorporate essential math symbolic constraints (e.g. commonsense constants, formulation regularization), leading to unexplainable and unreasonable predictions.",
"Besides, most of them only focus on arithmetic MWPs without any unknown, preventing them from generalizing to various types of MWPs, such as equation set problems.",
"To address the above issues, we propose a novel N euralS ymbolic Solver (NS-Solver), which explicitly and seamlessly incorporates different levels of symbolic constraints by auxiliary learning tasks.",
"Our NS-Solver consists of three main components, a problem reader to encode the math word problems into vector representations, a programmer to generate the symbolic grounded equations, which are executed to produce answers, and a symbolic executor to obtain final results.",
"In addition to the supervised training objective between generated symbolic grounded equations and ground-truth equations, our solver is also optimized by four novel auxiliary objectives that enforce four levels of problem understanding and symbolic reasoning.",
"First, we apply number prediction task to predict both the number quantity and number location in the problem in a self-supervised manner.",
"Second, we deploy commonsense constant prediction task to predict what prior commonsense knowledge (e.g. how many legs a chicken has) is required for our solver.",
"Third, we propose program consistency checker to compute the semantic loss between the predicted program and ground-truth equation to ensure reasonable equation mapping.",
"Finally, we also propose a novel duality exploiting task that exploits the quasi duality between symbolic grounded equation generation and the problem's part-of-speech generation to enhance the understanding ability of our solver.",
"There are some key advantages of our solution.",
"First of all, the above four auxiliary tasks can produce additional training signals, which improves the data efficiency in training and makes our solver more robust.",
"Second, using the predicted constant to constrain the target symbolic table can reduce the search space greatly, which means that our solver can generate correct symbolic grounded equations easier and better.",
"Third, the auxiliary tasks have been proven to help reduce the domain gap between seen and unseen MWPs (Sun et al., 2019, 2020), thus improving the reasoning ability of our solver.",
"Besides, beyond the current large-scale high-quality MWP benchmark that only includes one type of problems, we also construct a large-scale challenging Chinese MWPs dataset CM17K, which contains 4 types of MWPs (arithmetic MWPs, one-unknown linear MWPs, one-unknown non-linear MWPs, equation set problems) with more than 17K samples, to provide a more realistic and challenging benchmark for developing a universal and scalable math solver.",
"Extensive experiments on public Math23K and our proposed CM17k demonstrate the superiority of our NS-Solver compared to state-of-the-art methods in predicting final results while ensuring intermediate equation rationality.",
"Deep learning-based MWP Solvers.",
"Numerous methods have been proposed to tackle the MWP solving task, ranging from rule-based methods (Bakman, 2007; Yuhui et al., 2010), statistical machine learning methods (Kushman et al., 2014; Zhou et al., 2015; Roy and Roth, 2015, 2016; Mitra and Baral, 2016; Huang et al., 2016; Roy and Roth, 2018), semantic parsing methods (Shi et al., 2015; Koncelkedziorski et al., 2015; Huang et al., 2017; Liang et al., 2018a), to deep learning methods (Ling et al., 2017; Wang et al., 2017, 2018b; Huang et al., 2018; Wang et al., 2018a; Xie and Sun, 2019; Wang et al., 2019; Zhang et al., 2020a,b; Qin et al., 2020; Shen and Jin, 2020; Wu et al., 2020; Chen et al., 2021; Hong et al., 2021a,b).",
"However, most deep learning-based methods only follow the encoder-decoder framework without explicitly incorporating essential math symbolic constraints, resulting in some unexplainable and unreasonable predictions.",
"Besides, most of them only focus on arithmetic MWPs, preventing them from generalizing to various types, such as equation set problems.",
"Neural-Symbolic Computing.",
"Neural-symbolic computing has greatly promoted the development of semantic parsing.",
"Jia and Liang (2016); Dong and Lapata (2016); Zhong et al. (2017) applied neural sequence-to-sequence and sequence-to-tree models to semantic parsing with full supervision.",
"Liang et al. (2017b, 2018b) have advanced the state-of-the-art in weakly supervised semantic parsing on knowledge graphs and tabular databases.",
"Al-Problem Reader Programmer Executor Consistency Checker GT Equation Tree 1 2 Math Word Problem Auxiliary Tasks 26 heads and 82 feet....",
"though most of the successes of semantic parsing are limited to structured data sources, it is not expensive for MWPs since it is easy to crawl lots of problems with annotated equations and answers.",
"Therefore, MWP solving can benefit from supervised neural-symbolic computing.",
"Self-Supervised Learning.",
"Self-supervised auxiliary tasks have been widely used in the fields of natural language understanding (Devlin et al., 2019; Lan et al.).",
"Devlin et al. (2019) applied two self-supervised auxiliary tasks, masked LM and next sentence prediction, to improve the understanding ability of BERT by pretraining.",
"ALBERT (Lan et al.) introduces sentence-order prediction task to address the ineffectiveness of the next sentence prediction task in BERT.",
"Hendrycks et al. (2019) show that self-supervised learning can improve model robustness and uncertainty.",
"Dual Learning.",
"Dual learning, first proposed by He et al. (2016), is a reinforcement training process that jointly trains a primal task and its dual task.",
"Then Xia et al. (2017) considered it as a way of supervised learning and designed a probabilistic regularization term to exploit the duality.",
"It has been widely applied in various fields, such as machine translation (He et al., 2016), sentiment classifica-tion (Xia et al., 2017), question answering (Tang et al., 2017), visual question answering (Li et al., 2018), machine reading comprehension (Xiao et al., 2018), and code generation (Wei et al., 2019).",
"To the best of our knowledge, we are the first to exploit the duality in MWPs.",
"Different from previous works, we design a quasi dual learning method between symbolic grounded equation generation and problem's part-of-speech generation to enhance the understanding ability by easing the difficulty of generating problems from symbolic equations.",
"In this section, we present the design of the proposed NS-Solver.",
"Its backbone mainly consists of a problem reader that encodes the math word problems into vector representations, a programmer to generate the symbolic grounded programs in prefix order, and a symbolic executor to obtain final results.",
"The overview of our NS-Solver is visualized in Fig. 1.",
"We first introduce the backbone of our NS-Solver in section 3.1, and then we introduce other auxiliary tasks in section 3.2.",
"Problem Reader.",
"Given a problem text P = { x i } ni =1 processed by number template replacement which maps numeric values in a problem to number templates (e.g., 26 and 82 to n 1 and n 2 in Fig. 1), the problem reader encodes each token x i in the problem text into an embedding e i .",
"In this work, we deploy a two-layer bidirectional GRU to encode each token x i into an embedding e i = h i + h i where h i and h i are from forward and backward GRUs, respectively.",
"Besides, our problem encoder also outputs a problem representation g 0 = h n + h 0 as the initial hidden state of our programmer, where h n and h 0 are the last hidden state of forward and backward GRUs, respectively.",
"Programmer.",
"The programmer takes the output of the problem reader as input and the problem representation as the initial hidden state, and then decodes a problem as a sequence of tokens { y i } mi =1 which are organized as a prefix equation tree.",
"In this work, we deploy a tree-structured decoder (Xie and Sun, 2019) with attention mechanism (Bah-danau et al., 2015) as the backbone of our programmer and modify them with UET representation (Qin et al., 2020) to support more symbols for multiple types of MWPs.",
"In our programmer, the symbolic table consists of four parts.",
"For each problem, the problem-specific symbolic table contains math operators ( + , , , /, , = , ; ), unknown variable ( x and y ), a series of commonsense constants ( 1 , 3 . 14 , etc) predicted by the Commonsense Constant Prediction Task in 3.2, and the problem-specific number templates ( n 1 , n 2 , n 3 , etc).",
"It should be noticed that ; is a special operator with the lowest priority to integrate multiple equation trees as an ensemble equation tree, so that equation set problems can be handled as simple as arithmetic problems.",
"Executor.",
"We deploy sympy 2 , which is a python library for symbolic mathematics, as our symbolic executor for obtaining final results by solving generated equations.",
"The MWP solving task remains challenging since previous methods did not take full advantage of the rich semantics contained in a problem and lacking the ability to explicitly incorporate essential math symbolic constraints.",
"In this section, we introduce four auxiliary learning tasks to exploit additional training signals obtained from different tasks and exploit the result of the commonsense constant prediction task to explicitly constrain the constant symbolic table, which can reduce the search space for symbolic generation and ease the difficulty of generating correct constant.",
"Self-supervised Number Prediction (SNP) Tasks.",
"If a solver can fully understand the problem semantics, it should be able to identify the quantity of numbers in a problem (i.e., to count how many numeric values are in the problem) and 2 https://www.sympy.org/ their corresponding locations in the problem text accurately.",
"For example, if the solver can understand the problem in Fig. 1, it should be able to predict there are two numbers( 26 and 82 ) in the problem, and their positions are 15 and 18, respectively.",
"Thus, number quantity prediction and number location prediction are two critical self-supervised tasks to help the problem reader fully understand the problem semantics and measure the ability of problem understanding of a solver.",
"Both two number prediction tasks take the mean of the problem encoder's outputs { e i } ni =1 as their input and apply a single-layer feed-forward neural network to compute the distribution of number quantity and number locations.",
"The training objectives of two tasks for each problem are formulated as: LNQP = Q (cid:88) i =1 qt i log p ( q i | P ) , LNLP = L (cid:88) i =1 lt i log p ( l i | P ) .",
"(1) where LNQP and LNLP denote the loss for the Number Quantity Prediction (NQP) task and Number Location Prediction (NLP) task, respectively.",
"Q and L are the maximum possible quantities of number and maximum possible number locations for a problem at the dataset level.",
"qt i and lt i represent the ground-truth value on i -th index of the output probability distribution of NQP and NLP, respectively.",
"Commonsense Constant Prediction (CCP) Task.",
"Commonsense constants are important for solving some MWPs while most previous methods only consider the constants 1 and 3.14, which are not enough for a solver to solve problems that need other commonsense constants.",
"However, attaching a lot of constants to the problem-specific symbolic table will enlarge the search space, increasing the difficulty of generating rational symbolic equations.",
"Therefore, we propose a commonsense constant prediction task to predict what prior commonsense knowledge (e.g. a chicken has 2.0 legs and a rabbit has 4.0 legs for the problem in Fig. 1) is required for the solver to solve a problem according to the problem context.",
"In this way, we can reduce the search space greatly, thus improving the performance of our solver.",
"Similar to the number prediction tasks, the commonsense constant prediction task takes the mean of the problem encoder's output { e i } ni =1 as their input and apply a single-layer feed-forward neural network to compute the distribution of number quantity and number locations The training objective for each problem is formulated as: LCCP = C (cid:88) i =1 ct j log p ( c i | P ) .",
"where C is the total number of constants in the symbolic table and ct i represents the true value on i -th index of the output probability distribution.",
"Since it is impossible for the commonsense constant prediction task to achieve 100% accuracy, in addition to the predicted constants, we add three extra constants that are not predicted but with the highest probability into the symbolic table, making a better trade-off between the size of the search space and prediction accuracy.",
"Program Consistency Checker (PCC).",
"Although a problem can be solved by multiple equivalent but different equations, the predicted equations should be consistent with label equations as much as possible in the supervised learning setting.",
"Therefore, we propose a program consistency checker to check the symbolic program consistency and regularize the model by computing semantic loss between the predicted symbolic program and ground-truth equation to ensure the reasonable symbolic equation mapping.",
"Let y i and y i represent the predicted symbol and ground-truth symbol, p i represents the probability of y i , the semantic loss is obtained by computing a distance between the predicted distribution and ground-truth distribution as: LPCC = log (cid:88) i (cid:89) y i = y i p i (cid:89) y i (cid:54) = y i (1 p i ) .",
"Duality Exploiting (DE) Task.",
"Many previous works (He et al., 2016; Xia et al., 2017; Xiao et al., 2018; Wei et al., 2019) have shown promising results by dual learning framework.",
"Although intuitively, MWP solving and MWP generation are related to each other, i.e., the input of MWP solving is the output of MWP generation, and vice versa, it is very hard for the MWP generation task to generate good enough problems only by the equations without any topic information.",
"Therefore, we propose a duality exploiting task to enhance the understanding ability of our solver by exploiting the quasi duality between symbolic grounded equation generation and the problem's part-of-speech generation.",
"Given a pair of a problem and its corresponding equations ( P , T ), and P (cid:48) is the part-of-speech of P 3 , the training objective of the duality exploiting task is formulated as: L dual = (cid:2) log p ( P (cid:48) ) + log p ( T | P ) log p ( T ) log p (cid:0) P (cid:48) | T (cid:1)(cid:3) 2 .",
"where p ( P (cid:48) ) and p ( T ) are marginal distributions, which can be modeled by their LSTM (Hochreiter and Schmidhuber, 1997)-based language models, respectively.",
"Besides, we deploy a tree-structure encoder inspired by GTS (Xie and Sun, 2019) to encode equations in prefix for POS generation.",
"Given the training dataset D = { ( P i , T 1 ) , ( P 2 , T 2 ) , , ( PN , TN ) } , where T i is the universal expression tree of problem P i , we minimize the following loss function for our NS-Solver:",
"L = (cid:88) ( P,T ) D [ L ent 1 + 1 L dual + 2 LPCC + 3 ( LNQP + LNLP ) + 4 LCCP ] .",
"(5) where L ent 1 = log m (cid:89) t =1 prob( y t | P ) (6) where m denotes the size of T, and y t denotes the t-th output.",
"{ i } 4 i =1 are empirical values that will be detailed in Section 4.2.",
"LPOS = (cid:88) ( P (cid:48) ,T ) D [ L ent 2 + 5 L dual + 6 L PCC (cid:48) ] .",
"(7) where L ent 2 = log n (cid:89) t =1 prob( x t | T ) (8) where n denotes the size of P, and x t denotes the t-th output.",
"LPCC (cid:48) is the semantic loss between predicted POS and the ground-truth POS.",
"{ i } 6 i =5 are empirical values that will also be detailed in Section 4.2.",
"3 We use Jieba (https://github.com/fxsjy/jieba) to generate the POS of a problem.",
"Most public MWPs datasets are quite small such as ALG514 or exist some incorrect labels such as Dolphin18K.",
"An exception is the Math23K dataset, which contains 23161 problems labeled well with structured equations and answers.",
"However, it only contains one-unknown linear math word problems, which is not sufficient to validate the ability of a math solver about solving multiple types of MWPs.",
"Therefore, we introduce a new high-quality math word problems dataset, called CM17K, to validate the universality of a solver and provide a more realistic and challenging benchmark for developing a universal and scalable math solver.",
"We collect CM17K from two education websites 4 .",
"These problems are oriented grades 6-12, containing 4 types of MWPs with more than 17K samples, including 6215 arithmetic MWPs, 5193 one-unknown linear MWPs, 3129 one-unknown non-linear MWPs, and 2498 equation set problems.",
"It should be noticed that our dataset is sufficient for validating the universality of math word problem solvers since these problems can cover most cases about MWPs.",
"We label our data with structured equations and answers following Math23K (Wang et al., 2017).",
"We split our CM17K into train/valid/test sets at a ratio of 8:1:1.",
"The data statistics of Math23K and CM17K are shown in Table 1.",
"From the statistics, we can see that all statistics of CM17K are larger than Math23K.",
"This shows that our dataset is more challenging and difficult for math word problem solvers.",
"Besides, since CM17K contains more types of MWPs than Math23K, CM17K is more suitable 4 http://www.zxxk.com/ and http://www.jyeoo.com/ for validating the reasoning ability of a solver than Math23K.",
"We conduct experiments on Math23K and our CM17K.",
"The main state-of-the-arts to be compared are as follows: DNS (Wang et al., 2017) is a universal solver based on the seq2seq model with significant number identification (SNI).",
"GTS (Xie and Sun, 2019) is a goal-driven tree-structured MWP solver.",
"StackDecoder (Chiang and Chen, 2019) is an universal semantically-aligned math word problems solver.",
"(Zhang et al., 2020a) is an enhanced GTS with teacher-student distillation and multi-decoder ensemble.",
"Again, following prior works (Wang et al., 2017; Chiang and Chen, 2019; Xie and Sun, 2019), we use answer accuracy as the evaluation metric: if the calculated value of the predicted equation tree equals to the true answer, it is thought as correct since the predicted expression is equivalent to the target expression.",
"We use Pytorch 5 to implement our model on Linux with an NVIDIA RTX2080Ti GPU card.",
"All those words with fewer than 5 occurrences are converted into a special token UNK.",
"The size of word embed-dings and all hidden states for other layers are set as 128 and 512, respectively.",
"Our model is optimized by ADAM optimizor (Kingma and Ba, 2015) with 1 = 0.9, 2 =0.999, and (cid:15) = 1 e 8 .",
"The mini-batch size is set as 32.",
"The initial learning rate is set as 1 e 3 and then decreases to half every 40 epochs.",
"To prevent overfitting, we set dropout rate as 0.5 and weight decay as 1 e 5 .",
"Finally, we conduct greedy search to generate symbolic equation trees.",
"We set 1 , 2 , 3 , 5 , and 6 as 0.0005, 0.01, 1.0, 0.005, and 0.1 for both datasets, respectively.",
"We set 4 as 0.000001 for Math23K while we set 4 as 1.0 for CM17K.",
"All constants are extracted from the training set.",
"In each epoch, all training data is shuffled randomly and then cut into mini-batches.",
"Following prior works (Wang et al., 2017; Chiang and Chen, 2019; Xie and Sun, 2019), we conduct 5-fold cross-validation on Math23K.",
"For CM17K, we evaluate the performance on the test set.",
"The results are shown in Table",
"2. From Table 2, we can observe 5 http://pytorch.org that benefiting from the four new auxiliary tasks and neural-symbolic paradigm, our NS-Solver outperforms the baselines on both datasets in terms of answer accuracy.",
"Specifically, for Math23K and CM17K, the accuracy gains of NS-Solver over GTS are 1.37% and 5.93%, respectively.",
"Comparing with TSN-MD, our solver outperforms it by about 0.6% on Math23K.",
"It shows that our model is more feasible for solving multiple types of MWPs.",
"It also shows that our NS-Solver is more effective than other state-of-the-art models on the real-world scenario that needs to solve various MWPs with a unified solver.",
"We drill down to analyze the generalization of DNS, GTS, and NS-Solver on different types of MWPs in the test subset of CM17K.",
"Their answer accuracy on different types of MWPs is shown in Table",
"3. We can observe that our NS-Solver outperforms the other two models by a large margin on all subsets.",
"Specifically, the accuracy gains of our NS-Solver over GTS on four subsets are 3.87%, 9.12%, 6.99%, and 9.44%.",
"This shows that with the help of four auxiliary tasks, our NS-Solver obtains better generalization ability on multiple types of MWPs than baselines.",
"Intuitively, the size of the symbolic equation tree is proportional to the complexity of the mathematical relationship in the problem.",
"The more complex the mathematical relationship is, the more difficult it is to solve the problem.",
"Here, we compare our proposed NS-Solver with GTS on CM17K to show the superiority of our NS-Solver on different equation tree sizes.",
"The answer accuracies for different sizes of expression trees on CM17K test subset are shown in Fig.",
"2. We can see that there is a tendency Figure 2: Answer accuracies for different sizes of symbolic equation trees on CM17K.",
"for answer accuracy to degrade with the growth of the problem complexity measured as the size of the equation tree, and our NS-Solver outperforms GTS on most cases of different equation tree sizes.",
"This shows our NS-Solver can better model the mathematical relationships of the problem than GTS.",
"It can also be noticed that the improvement of our NS-Solver over the GTS is increasing when the problems become more complex.",
"However, although our model outperforms other methods, there still has room for improvement in semantic understanding and symbolic reasoning since longer equations often match with more complex MWPs which entail more complex math relationships.",
"We study the contribution of different auxiliary tasks of our NS-Solver.",
"For this purpose, we consider five different combinations: 1) only the backbone [NS-Solver CCP SNP PCC DE]; 2) backbone + duality exploiting task [NS-Solver CCP SNP PCC]; 3) backbone + duality exploiting task + program consistent checker [NS-Solver CCP -SNP]; 4) backbone + duality exploiting task + program consistent checker + number prediction tasks [NS-Solver CCP]; and 5) the proposed NS-Solver [NS-solver].",
"For each of these combinations, each model was trained for 80 epochs on CM17K and validated on its test subset.",
"The learning rate decreased to half every 20 epochs.",
"The results are provided in Fig.",
"4. As one can see, all four auxiliary tasks can improve performance.",
"Specifically, the accuracy gains of DE, PCC, SNP, and CCP are 1.00%, 1.41%, 1.11%, and 1.12%, respectively.",
"Besides, the binary accuracies of the two SNP tasks are 97% (number quantity prediction) and 96.8% (number location prediction).",
"Moreover, the accuracy of our CCP Case 1: NUM(n 0 [5]) NUM (n 1 [12]) NUM(n 2 [240])",
"task is 97.8%.",
"This shows that our auxiliary tasks can enhance our NS-Solver to enforce better problem understanding and symbol reasoning.",
"Overall, our proposed NS-Solver achieves the best answer accuracy.",
"We also present the results of our NS-Solver with different combinations of four auxiliary tasks in Fig.",
"3. Benefiting from explicitly exploiting the probabilistic correlation between two quasi dual tasks to regularize the training process in our duality exploiting (DE) task, our [NS-solver CCP SNP PCC] can generate correct equations by understanding the problem better while [NS-solver CCP SNP PCC DE] generates error equations, as shown in Case 1 .",
"With the program consistency checker (PCC) that effectively regularizes the model's output by constraining the distance between predicted symbols and ground-truth symbols during training, [NS-solver CCP SNP] can generate more consistent equations with the ground-truth than [NS-solver CCP SNP PCC], as shown in Case 2 .",
"With self-supervised number prediction (SNP), [NS-solver CCP] can generate better results and avoid generating symbols that do not belong to the problem, as shown in Case 3 .",
"With commonsense constant prediction (CCP), our NS-Solver manages to choose correct constants by constraining the constant symbolic table using predicted results of CCP.",
"As shown in Case 4 , [NS-solver CCP] chooses error constant 10 while NS-solver chooses two correct constants.",
"Besides, although GTS and NS-Solver generate the same symbols sometimes, our NS-Solver generates correct equations with the help of our four auxiliary objectives, as shown in Case 5 .",
"Overall, all four auxiliary tasks can improve our NS-Solver's understanding and reasoning ability.",
"Model BERT + Tree Decoder (Xie and Sun, 2019) NS-Solver + BERT CM17K 55.0% 60.68% Table 4: Generalization to different backbone 4.7 Extends to other backbone To show that our auxiliary tasks can be adapted to other backbones, we replace GTS's encoder with BERT ( BERT + Tree Decoder ) and NS-Solver's encoder with BERT ( NS-Solver + BERT ), where we adopt a Chinese BERT-base pre-trained with whole word masking (Cui et al., 2020).",
"We conduct experiments on CM17K.",
"The results are shown in Table",
"4. We can observe that with auxiliary tasks, our NS-Solver + BERT still can outperform BERT + Tree Decoder , which shows that our auxiliary tasks' strong generalization.",
"In this work, we propose Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by four auxiliary tasks.",
"Our NS-Solver consists of a problem reader to encode problems, a programmer to generate a symbolic grounded program, and a symbolic executor to obtain final results.",
"In addition to supervised learning with target expression, our solver is also optimized via four new auxiliary objectives that enforce four levels of symbolic reasoning.",
"Besides, we also construct a new dataset CM17K containing 4 types of MWPs with more than 17K samples, which provides a more realistic and challenging benchmark for developing a universal and scalable math solver.",
"Extensive experiments on Math23K and CM17K demonstrate the superiority of our NS-Solver compared to state-of-the-art methods in answer accuracy while ensuring intermediate equation rationality.",
"We collected CM17K from two online education websites, which is only used for academic research, and the copyright belongs to the original websites.",
"This work may inspire research in the field of numerical reasoning.",
"Acknowledgements This work was supported in part by National Key R&D Program of China under Grant No.2020AAA0109700, National Natural Science Foundation of China (NSFC) under Grant No.U19A2073, No.61976233 and No. 61836012, the Natural Science Foundation of Guangdong Province under Grant No. 2017A030312006, Guangdong Province Basic and Applied Basic Research (Regional Joint Fund-Key) Grant No.2019B1515120039, Shenzhen Fundamental Research Program (Project No.RCYX20200714114642083 and No.JCYJ20190807154211365), Zhijiang Lab's Open Fund (No.2020AA3AB14), CSIG Young Fellow Support Fund, and Guangdong Provincial Key Laboratory of Information Security Technology."
] | [
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"method",
"objective",
"objective",
"abstain",
"objective",
"method",
"result",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"objective",
"result",
"objective",
"objective",
"objective",
"method",
"abstain",
"other"
] |
[
"Natural language processing (NLP) tasks, ranging from text classification to text generation, have been revolutionised by the pretrained language models, such as BERT.",
"This allows corporations to easily build powerful APIs by encapsulating fine-tuned BERT models for downstream tasks.",
"However, when a fine-tuned BERT model is deployed as a service, it may suffer from different attacks launched by the malicious users.",
"In this work, we first present how an adversary can steal a BERT-based API service (the victim/target model) on multiple benchmark datasets with limited prior knowledge and queries.",
"We further show that the extracted model can lead to highly transferable adversarial attacks against the victim model.",
"Our studies indicate that the potential vulnerabilities of BERT-based API services still hold, even when there is an architectural mismatch between the victim model and the attack model.",
"Finally, we investigate two defence strategies to protect the victim model, and find that unless the performance of the victim model is sacrificed, both model extraction and adversarial transferability can effectively compromise the target models.",
"Recently, owing to the success of pretrained BERT-based models (Devlin et al., 2018; Liu et al., 2019), the downstream NLP tasks have been revolutionised in the form of the limited task-specific supervision via fine-tuning on BERT models.",
"Meanwhile, commercial task-oriented NLP models, built on top of BERT models, are often deployed as pay-per-query prediction APIs for the sake of the protection of data privacy, system integrity and intellectual property.",
"As publicly accessible services, commercial APIs have become victims of different explicit attacks, such as privacy attack (Lyu et al., 2020a,b; Shokri et al., 2017), adversarial attack (Shi et al., 2018), etc.",
"Recently, prior works have also found that with the aid of carefully-designed queries and outputs of the NLP APIs, many existing APIs can be locally imitated via model extraction (Krishna et al., 2019; Wallace et al., 2020), which raises concerns of the vulnerability of NLP APIs.",
"For instance, competing companies can imitate the victim model with a negligible cost.",
"Since the considerable investment of data annotation and algorithm design are sidestepped, the competing companies would be able to launch an identical service with a more competitive price than the victim companies.",
"Such security issue can be exacerbated, when the back-end pertained models, such as BERT, are publicly available (Krishna et al., 2019).",
"Beyond model extraction, we further demonstrate the adversarial examples crafted by the extracted model could be transferred to the black-box victim model.",
"From the perspective of commercial competition, if the competitors manage to predicate incorrect predictions of the victim services, they can launch an advertising campaign against the victim model with these adversarial examples.",
"In summary, we investigate the vulnerabilities of publicly available NLP classification APIs through a two-stage attack.",
"First, a model extraction attack is issued to obtain a local copy of the target model.",
"Then, we conduct adversarial attacks against the extracted model, which is empirically transferable to the target model.",
"To patch these vulnerabilities, we mount two basic defence strategies on the victim models.",
"The empirical results show that without corrupted predictions from the victims, model extraction and adversarial example transferability are resilient to the defence.",
"Our results spotlight the risks of using pretrained BERT to deploy the APIs through the lens of model extraction attack and adversarial example transfer attack.",
"Such attacks can be conducted at a cost of as little as $7.1.",
"1 1 Code is available at https://github.com/ xlhex/extract_and_transfer 2 Related Work 2.1 Model Extraction Attack (MEA) Model extraction attacks (also referred to as model stealing\") have been effectively applied to different tasks, ranging from computer vision tasks (Orekondy et al., 2019) to NLP tasks (Chan-drasekaran et al., 2020).",
"In a nutshell, model extraction enables malicious users to forge the functionality of a black-box victim model as closely as possible.",
"The activity seriously causes the intellectual property infringement.",
"Additionally, the follow-up attacks can be facilitated as the aftermath of the model extraction.",
"Particularly, an adversarial attack can be built upon the extracted model, which is able to enhance the successful rate of fooling the victim model.",
"As a byproduct of the adversarial attack, it has been shown that adversarial transferability encourages a transition of the adversarial examples from one model to other models (Liu et al., 2016; Papernot et al., 2017), especially in computer vision research.",
"Although such property has been explored by a few recent works in NLP systems (Sun et al., 2020; Wallace et al., 2020), it remains largely unexplored for the BERT-based APIs, and whether the transferability could succeed when the substitute (extracted) model and the victim model have different architectures.",
"Our attacks against BERT-based APIs consist of two phases, Model Extraction Attack (MEA) and Adversarial Example Transfer (AET), as depicted in Figure",
"1. 3.1 Model Extraction Attack (MEA) In the first phase, we assume that a victim model M v is commercially available as a prediction API for target task T .",
"An adversary attempts to reconstruct a local copy M e (extracted model) of M v via querying M v .",
"Our goal is to extract a model with comparable accuracy to the victim model.",
"Generally, MEA can be formulated as a two-step approach, as illustrated by the left figure in Figure 1:",
"1. Attackers craft a set of inputs as queries, then send them to the victim model (BERT-based API) to obtain predictions; Dataset #Train #Dev #Test Task TP-US 22,142 2,767 2,767 sentiment analysis Yelp 520K 40,000 1,000 sentiment analysis AG 112K 1,457 1,457 topic classification Blog 7,098 887 887 topic classification Table 1: Statistic of sentiment analysis and topic classification datasets.",
"For each query x i , M v returns a K -dim posterior probability vector y i [0 , 1] k , with (cid:80) k y ki = 1 .",
"The resulting dataset { x i , y i } mi =1 by m queries is used to train M e .",
"We assume that the attacker fine-tunes the public release of f bert , on this dataset, with the objective of imitating the behaviour of M v .",
"Once the local copy of M e is obtained, the attacker no longer needs to pay the original service provider.",
"In the second phase, we leverage the transferability of adversarial examples: we first generate adversarial examples for the extracted model, then transfer the generated adversarial examples to the victim model.",
"The intuition of the experiment is based on the transferable vulnerabilities crossing the models the adversarial examples generated by the extracted model are transferable to the victim model.",
"Here we use the extracted model to serve as a surrogate to craft adversarial examples in a white-box manner.",
"Such attack aggravates the vulnerabilities of victim models.",
"To evaluate the efficacy of the proposed attacks, we select four NLP datasets covering two main tasks,",
"i) sentiment analysis and",
"ii) topic classification.",
"We use TP-US from Trustpilot Sentiment dataset (Hovy et al., 2015) and YELP dataset (Zhang et al., 2015) for sentiment analysis.",
"We use AG news corpus (Del Corso et al., 2005) and Blog posts dataset from the blog authorship corpus (Schler et al., 2006) for topic classification.",
"We refer readers to Appendix A for more details about the pre-processing of these datasets.",
"available pretrained BERT.",
"Once the victim model is task-specifically fine-tuned by following Section 3.1, it can be queried as a black-box API.",
"Afterwards, the extracted model can be obtained through imitating the victim model.",
"Following Krishna et al. (2019), the queries start from the size of 1x to that of victim's training set, then scale up to 5x.",
"We test the accuracy of the victim model and the extracted model on the same held-out set for a fair comparison.",
"Query Distribution: To examine the correlation between the query distribution ( DA ) and the effectiveness of our attacks on the victim model trained on data from DV ( c.f., Table 1), we explore the following two different scenarios: (1) we use the same data as the original data of the victim model ( DA = DV ).",
"Note that attackers have no true labels of the original data; (2) we sample queries from different distribution but same domain as the original data ( DA (cid:54) = DV ).",
"Since the owners of APIs tend to use the in-house datasets, it is difficult for the attacker to know the target data distribution as a prior knowledge.",
"Therefore, our second assumption is closer to the practical scenario.",
"As the training datasets of the victims are sourced from either review domain or news domain, we consider datasets from these two domains as our queries.",
"Specifically, we leverage Amazon review dataset (Zhang et al., 2015) or CNN/DailyMail dataset (Hermann et al., 2015) to query the victim models.",
"According to Table 2, we have observed that: 1) the success of the extraction correlates to the domain closeness between the victim's training Model #Q TP-US Yelp AG Blog Victim model 85.5 95.6 94.5 97.1 DA = DV 86.5 95.7 94.5 96.8 DA (cid:54) = DV (review) 1x 85.3 94.1 88.6 88.2 5x 85.8 95.0 91.3 92.8 DA (cid:54) = DV (news) 1x 84.2 91.1 90.5 83.1 5x 85.5 93.1 92.3 87.6 Table 2: Accuracy [%] of the victim models and the extracted models among different datasets in terms of domains and sizes.",
"data and the attacker's queries; 2) using same data even outperforms the victim models, which is also known as self-distillation (Furlanello et al., 2018); 3) albeit the different distributions brought by review and news corpora, our MEA can still achieve 0.85-0.99 victim models' accuracies when the number of queries varies in {1x,5x}.",
"Although more queries suggest a better extraction performance, small query budgets (0.1x and 0.5x) are often sufficiently successful.",
"More results are available in Appendix C. From now on, unless otherwise mentioned, we will use news data for AG news, and review data for TP-US , Blog and Yelp.",
"2 Costs Estimation: We analyse the efficiency of MEA on various classification datasets.",
"Each query is charged due to a pay-as-you-use policy adopted by service providers.",
"We estimate costs for each task in Table 3 according to Google APIs 3 and IBM APIs 4 .",
"Considering the efficacy of model extrac-2 Empirically, we do not have access to the original training data of the victim model.",
"After extracting a black-box victim model into a white-box extracted model, a white-box adversarial attack can be implemented.",
"We first generate adversarial examples on the extracted model, then examine whether these examples are transferable to the target victim model.",
"To evaluate such pseudo white-box attack, we assess it via a transferability metric, which refers to the misclassification rate of adversarial samples on the victim APIs.",
"To generate natural adversarial examples, we follow the protocol (Sun et al., 2020) that leverages the gradients of the gold labels w.r.t the embeddings of the input tokens to find the most informative tokens, which have the largest gradients among all positions within a sentence.",
"Then we corrupt the selected tokens with one of the following typos: 1) Insertion; 2) Deletion; 3) Swap; 4) Mistype: Mistyping a word though keyboard, such as oh 0h; 5) Pronounce: Wrongly typing due to the close pronounce of the word, such as egg agg; 6) Replace-W: Replace the word by the frequent human behavioural keyboard typo based on the Wikipedia statistics (Sun, 2020).",
"In order to understand whether our extracted model manages to improve the transferability, we also launch a list of black-box adversarial attacks in the same manner.",
"Table 4 demonstrates that our pseudo white-box attack makes the victim model more vulnerable to adversarial examples in terms of transferability more than twice effective in the best case, compared to the black-box counterparts.",
"This corroborates our claim that the extracted model, retaining a high-fidelity imitation of the victim model, severely impairs the output integrity of the victim model, indicated as the considerable increase of the transferable examples.",
"(5x",
"v.s.",
"1x) lead to better attack performances.",
"We believe this conspicuous gain attributes to the higher fidelity to the victim model, obtained by a better extraction ( c.f., Table 2).",
"In practice, the adversary may not know the victim's model architecture.",
"Hence we also study the attacking behaviours under the different architectural settings.",
"According to Table 5, when both the victim and the extracted models adopt BERT-large, the vulnerability of the victim is magnified in all attacks, which implies that the model with higher capability is more vulnerable to our attacks.",
"As expected, the efficacy of AET can be alleviated when an architectural mismatch exists.",
"5 5 Defence We next briefly discuss two defence strategies the victim model can adopt to counter these attacks.",
"A higher leads to smoother probability, whereas a lower one produces a sharper distribution.",
"When =0, the posterior probability becomes a hard label.",
"Prediction perturbation (PERT).",
"Another defence method is adding normal noise with variance to the predicted probability distribution.",
"The larger the variance of the noise distribution, the stronger the defence.",
"Table 6 indicates that varying temperature on softmax cannot defend the victim model against MEA, except for =0 (hard label), which can degrade all attacks to some extent.",
"Regarding perturbation, it can achieve a significant defence at the cost of the accuracy of the victim models.",
"Surprisingly, when =0.50, MEA surpasses the victim model.",
"We conjecture that albeit the perturbed post-softmax probability, the extracted model can still acquire certain informative knowledge via model extraction.",
"We will conduct an in-depth study on this in the future.",
"To sum up, both MEA and AET pose severe threats to the BERT-based APIs, even when the adversary merely has access to limited or erroneous predictions.",
"This work goes beyond model extraction from BERT-based APIs, and we also identify the extracted model can largely enhance adversarial example transferability even in difficult scenarios, i.e., limited query budget, queries from different distributions, or architectural mismatch.",
"Extensive experiments based on representative NLP datasets and tasks under various settings demonstrate the effectiveness of our attacks against BERT-based APIs.",
"In the future, we plan to extend our work to more complex NLP tasks, and develop more effective defences.",
"We would like to thank anonymous reviewers for their valuable feedback and constructive suggestions.",
"The computational resources of this work are supported by the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MAS-SIVE) ( www.massive.org.au )."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"other"
] |
[
"Most previous studies integrate cognitive language processing signals (e.g., eye-tracking or EEG data) into neural models of natural language processing (NLP) just by directly concatenating word embeddings with cognitive features, ignoring the gap between the two modalities (i.e., textual vs. cognitive) and noise in cognitive features.",
"In this paper, we propose a CogAlign approach to these issues, which learns to align textual neural representations to cognitive features.",
"In CogAlign, we use a shared encoder equipped with a modality discriminator to alternatively encode textual and cognitive inputs to capture their differences and commonalities.",
"Additionally, a text-aware attention mechanism is proposed to detect task-related information and to avoid using noise in cognitive features.",
"Experimental results on three NLP tasks, namely named entity recognition, sentiment analysis and relation extraction, show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.",
"Moreover, our model is able to transfer cognitive information to other datasets that do not have any cognitive processing signals.",
"The source code for CogAlign is available at https://github.",
"com/tjunlp-lab/CogAlign.git .",
"Cognitive neuroscience, from a perspective of language processing, studies the biological and cognitive processes and aspects that underlie the mental language processing procedures in human brains while natural language processing (NLP) teaches machines to read, analyze, translate and generate human language sequences (Muttenthaler et al., 2020).",
"The commonality of language processing shared by these two areas forms the base of Corresponding author cognitively-inspired NLP, which uses cognitive language processing signals generated by human brains to enhance or probe neural models in solving a variety of NLP tasks, such as sentiment analysis (Mishra et al., 2017; Barrett et al., 2018), named entity recognition (NER) (Hollenstein and Zhang, 2019), dependency parsing (Strzyz et al., 2019), relation extraction (Hollenstein et al., 2019a), etc.",
"In spite of the success of cognitively-inspired NLP in some tasks, there are some issues in the use of cognitive features in NLP.",
"First, for the integration of cognitive processing signals into neural models of NLP tasks, most previous studies have just directly concatenated word embeddings with cognitive features from eye-tracking or EEG, ignoring the huge differences between these two types of representations.",
"Word embeddings are usually learned as static or contextualized representations of words in large-scale spoken or written texts generated by humans.",
"In contrast, cognitive language processing signals are collected by specific medical equipments, which record the activity of human brains during the cognitive process of language processing.",
"These cognitive processing signals are usually assumed to represent psycholinguistic information (Mathias et al., 2020) or cognitive load (Antonenko et al., 2010).",
"Intuitively, information in these two types of features (i.e., word embeddings and cognitive features) is not directly comparable to each other.",
"As a result, directly concatenating them could be not optimal for neural models to solve NLP tasks.",
"The second issue with the incorporation of cognitive processing signals into neural models of NLP is that not all information in cognitive processing signals is useful for NLP.",
"The recorded signals contain information covering a wide variety of cognitive processes, particularly for EEG (Williams et al., 2019; Eugster et al., 2014).",
"For different tasks, we may need to detect elements in the recorded signals, Figure 1: Neural Architecture of the proposed CogAlign.",
"which are closely related to specific NLP tasks, and neglect features that are noisy to the tasks.",
"In order to address the two issues, we propose CogAlign , a multi-task neural network that learns to align neural representations of texts to cognitive processing signals, for several NLP tasks.",
"As shown in Figure 1, instead of simply concatenating cognitive features with word embeddings, we use two private encoders to separately encode cognitive processing signals and word embeddings.",
"The two encoders will learn task-specific representations for cognitive and textual inputs in two disentangled spaces.",
"To align the representations of neural network with cognitive processing signals, we further introduce an additional encoder that is shared by both data sources.",
"We alternatively feed cognitive and textual inputs into the shared encoder and force it to minimize an adversarial loss of the discriminator stacked over the shared encoder.",
"The discriminator is task-agnostic so that it can focus on learning both differences and deep commonalities between neural representations of cognitive and textual features in the shared encoder.",
"We want the shared encoder to be able to transfer knowledge of cognitive language processing signals to other datasets even if cognitive processing signals are not available for those datasets.",
"Therefore, CogAlign does not require cognitive processing signals as inputs during inference.",
"Partially inspired by the attentive pooling network (Santos et al., 2016), we propose a text-aware attention mechanism to further align textual inputs and cognitive processing signals at the word level.",
"The attention network learns a compatibility matrix of textual inputs to cognitive processing signals.",
"The learned text-aware representations of cognitive processing signals also help the model to detect task-related information and to avoid using other noisy information contained in cognitive processing signals.",
"We present CogAlign that learns to align neural representations of natural language to cognitive processing signals at both word and sentence level.",
"Our analyses show that it can learn task-related specific cognitive processing signals.",
"We propose a text-aware attention mechanism that extracts useful cognitive information via a compatibility matrix.",
"With the adversarially trained shared encoder, CogAlign is capable of transferring cognitive knowledge into other datasets for the same task, where no recorded cognitive processing signals are available.",
"We conduct experiments on incorporating eye-tracking and EEG signals into 3 different NLP tasks: NER, sentiment analysis and relation extraction, which show CogAlign achieves new state-of-the-art results and significant improvements over strong baselines.",
"Eye-tracking for NLP.",
"Eye-tracking data have proved to be associated with language comprehension activity in human brains by numerous research in neuroscience (Rayner, 1998; Henderson and Ferreira, 1993).",
"In cognitively motivated NLP, several studies have investigated the impact of eye-tracking data on NLP tasks.",
"In early works, these signals have been used in machine learning approaches to NLP tasks, such as part-of-speech tagging (Barrett et al., 2016), multiword expression extraction (Ro-hanian et al., 2017), syntactic category prediction (Barrett and Sgaard, 2015).",
"In neural models, eye-tracking data are combined with word embeddings to improve various NLP tasks, such as sentiment analysis (Mishra et al., 2017) and NER (Hollen-stein and Zhang, 2019).",
"Eye-tracking data have also been used to enhance or constrain neural attention in (Barrett et al., 2018; Sood et al., 2020b,a; Takmaz et al., 2020).",
"EEG for NLP.",
"Electroencephalography (EEG) measures potentials fluctuations caused by the activity of neurons in cerebral cortex.",
"The exploration of EEG data in NLP tasks is relatively limited.",
"Chen et al. (2012) improve the performance of automatic speech recognition (ASR) by using EEG signals to classify the speaker's mental state.",
"Hollenstein et al. (2019a) incorporate EEG signals into NLP tasks, including NER, relation extraction and sentiment analysis.",
"Additionally, Muttenthaler et al. (2020) leverage EEG features to regularize attention on relation extraction.",
"Adversarial Learning.",
"The concept of adversarial training originates from the Generative Adversarial Nets (GAN) (Goodfellow et al., 2014) in computer vision.",
"Since then, it has been also applied in NLP (Denton et al., 2015; Ganin et al., 2016).",
"Recently, a great variety of studies attempt to introduce adversarial training into multi-task learning in NLP tasks, such as Chinese NER (Cao et al., 2018), crowdsourcing learning (Yang et al., 2018), cross-lingual transfer learning (Chen et al., 2018; Kim et al., 2017), just name a few.",
"Different from these studies, we use adversarial learning to deeply align cognitive modality to textual modality at the sentence level.",
"CogAlign is a general framework for incorporating cognitive processing signals into various NLP",
"tasks.",
"The target task can be specified at the predictor layer with corresponding task-specific neural network.",
"CogAlign focuses on aligning cognitive processing signals to textual features at the word and encoder level.",
"The text-aware attention aims at learning task-related useful cognitive information (thus filtering out noises) while the shared encoder and discriminator collectively learns to align representations of cognitive processing signals to those of textual inputs in a unified semantic space.",
"The matched neural representations can be transferred to another datasets of the target task even though cognitive processing signals is not present.",
"The neural architecture of CogAlign is visualized in Figure 1. We will elaborate the components of model in the following subsections.",
"Word Embeddings.",
"For a given word x i from the dataset of a target NLP task (e.g., NER), we obtain the vector representation h wordi by looking up a pre-trained embedding matrix.",
"The obtained word embeddings are fixed during training.",
"For NER, previous studies have shown that character-level features can improve the performance of sequence labeling (Lin et al., 2018).",
"We therefore apply a character-level CNN framework (Chiu and Nichols, 2016; Ma and Hovy, 2016) to capture the character-level embedding.",
"The word representation of word x i in NER task is the concatenation of word embedding and character-level embedding.",
"Cognitive Processing Signals.",
"For cognitive inputs, we can obtain word-level eye-tracking and EEG via data preprocessing (see details in Section 5.1).",
"Thus, for each word x i , we employ two cognitive processing signals h eyei and h eegi .",
"The cognitive input h cogi can be either a single type of signal or a concatenation of different cognitive processing signals.",
"As not all information contained in cognitive processing signals is useful for the target NLP task, we propose a text-aware attention mechanism to assign text sensitive weights to cognitive processing signals.",
"The main process of attention mechanism consists of learning a compatibility matrix between word embeddings H word R d w N and cognitive representations H cog R d c N from the input layer and preforming cognitive-wise max-pooling operation over the matrix.",
"where d w and d c are the dimension of word embeddings and cognitive representations, respectively, N is the length of the input, and U RN N is a trainable parameter matrix.",
"We then obtain a vector g cog R d c , which is computed as the importance score for each element in the cognitive processing signals with regard to the word embeddings, by row-wise max-pooling over G .",
"Finally, we compute attention weights and the text-aware representation of cognitive processing signals H cog (cid:48) as follows: cog = softmax( g cog ) (2) H cog (cid:48) = cog H cog (3) 3.3 Encoder Layer We adopt Bi-LSTMs to encode both cognitive and textual inputs following previous works (Hollen-stein and Zhang, 2019; Hollenstein et al., 2019a).",
"In this work, we employ two private Bi-LSTMs and one shared Bi-LSTM as shown in Figure 1, where private Bi-LSTMs are used to encode cognitive and textual inputs respectively and the shared Bi-LSTM is used for learning shared semantics of both types of inputs.",
"We concatenate the outputs of private Bi-LSTMs and shared Bi-LSTM as input to the task-specific predictors of subsequent NLP tasks.",
"The hidden states of the shared Bi-LSTM are also fed into the discriminator.",
"We alternatively feed cognitive and textual inputs into the shared Bi-LSTM encoder.",
"Our goal is that the shared encoder is able to map the representations of the two different sources of inputs into the same semantic space so as to learn the deep commonalities of two modalities (cognitive and textual).",
"For this, we use a self-supervised discriminator to provide supervision for training the shared encoder.",
"Particularly, the discriminator is acted as a clas-sifier to categorize the alternatively fed inputs into either the textual or cognitive input.",
"For the hidden state of modality k , we use a self-attention mechanism to first reduce the dimension of the output of the shared Bi-LSTM H sk R d h N : = softmax( v T tanh( W s H sk + b s )) (4) h sk = N (cid:88) i =1 i H sk i (5) where W s R d h d h , b s R d h , v R d h are trainable parameters in the model, h sk is the output of self-attention mechanism.",
"Then we predict the category of the input by softmax function: D ( h sk ) = softmax( W d h sk + b d ) (6) where D ( h sk ) is the probability that the shared encoder is encoding an input with modality k .",
"Given a sample X , the final cognitively augmented representation after the encoder layer can be formulated as H (cid:48) = [ H p ; H s ] R 2 d h N .",
"H p and H s are the result of private Bi-LSTM and shared Bi-LSTM, respectively.",
"For sequence labeling tasks like NER, we employ the conditional random field (CRF) (Lafferty et al., 2001) as the predictor as Bi-LSTM-CRF is widely used in many sequence labeling tasks (Ma and Hovy, 2016; Luo et al., 2018) due to the excellent performance and also in cognitively inspired NLP (Hollenstein and Zhang, 2019; Hollenstein et al., 2019a).",
"Firstly, we project the feature representation H (cid:48) onto another space of which dimension is equal to the number of NER tags as follows: o i = W n h (cid:48) i + b n (7) We then compute the score of a predicted tag sequence y for the given sample X : score ( X, y ) = N (cid:88) i =1 ( o i,y i + T y i 1 ,y i ) (8) where T is a transition score matrix which defines the transition probability of two successive labels.",
"Sentiment analysis and relation extraction can be regarded as multi-class classification tasks, with 3 and 11 classes, respectively.",
"For these two tasks, we use a self attention mechanism to reduce the dimension of H (cid:48) and obtain the probability of a predicted class via the softmax function.",
"In order to learn the deep interaction between cognitive and textual modalities in the same semantic space, we want the shared Bi-LSTM encoder to output representations that can fool the discriminator.",
"Therefore we adopt the adversarial learning strategy.",
"Particularly, the shared encoder acts as the generator that tries to align the textual and cognitive modalities as close as possible so as to mislead the discriminator.",
"The shared encoder and discriminator works in an adversarial way.",
"Additionally, to further increase the difficulty for the discriminator to distinguish modalities, we add a gradient reversal layer (GRL) (Ganin and Lempitsky, 2015) in between the encoder layer and predictor layer.",
"The gradient reversal layer does nothing in the forward pass but reverses the gradients and passes them to the preceding layer during the backward pass.",
"That is, gradients with respect to the adversarial loss L Adv are replaced with L Adv after going through GRL.",
"CogAlign is established on a multi-task learning framework, where the final training objective is composed of the adversarial loss L Adv and the loss of the target task L Task .",
"For NER, we exploit the negative log-likelihood objective as the loss function.",
"Given T training examples ( X i ; y i ) 1 , L Task is defined as follows: L Task = T (cid:88) i =1 log p ( y i | X i ) (9) where y denotes the ground-truth tag sequence.",
"The probability of y is computed by the softmax function: p ( y | X ) = e score ( X,y ) (cid:80) (cid:101) y Y e score ( X, (cid:101) y ) (10) For sentiment analysis and relation extraction tasks, the task objective is similar to that of NER.",
"The only difference is that the label of the task is changed from a tag sequence to a single class.",
"The adversarial loss L Adv is defined as: L Adv = min s (max d K (cid:88) k =1 T k (cid:88) i =1 log D ( S ( X ik ))) (11) 1 X can be either textual or cognitive input as we alternatively feed word embeddings and cognitive processing signals into CogAlign.",
"where s and d denote the parameters of the shared Bi-LSTM encoders S and modality discriminator D , respectively, X ik is the representation of sentence i in a modality k .",
"The joint loss of CogAlign is therefore defined as: L = L Task + L Adv (12) 4.3 Inference After training, the shared encoder learns a unified semantic space for representations of both cognitive and textual modality.",
"We believe that the shared space embeds knowledge from cognitive processing signals.",
"For inference, we therefore only use the textual part and the shared encoder (components in the red dashed box in Figure 1).",
"The private encoder outputs textual-modality-only representations while the shared encoder generates cognitive-augmented representations.",
"The two representations are concatenated to feed into the predictor layer of the target task.",
"This indicates that we do not need cognitive processing signals for the inference of the target task.",
"It also means that we can pretrain CogAlign with cognitive processing signals and then transfer it to other datasets where cognitive processing signals are not available for the same target task.",
"We conducted experiments on three NLP tasks, namely NER, sentiment analysis and relation extraction with two types of cognitive processing signals (eye-tracking and EEG) to validate the effectiveness of the proposed CogAlign.",
"We chose a dataset 2 with multiple cognitive processing signals: Zurich Cognitive Language Processing Corpus (ZuCo) (Hollenstein et al., 2018).",
"This corpus contains simultaneous eye-tracking and EEG signals collected when 12 native English speakers are reading 1,100 English sentences.",
"Word-level signals can be divided by the duration of each word.",
"The dataset includes two reading paradigms: normal reading and task-specific reading where subjects exercise some specific task.",
"In this work, we only used the data of normal reading, since this paradigm accords with human natural reading.",
"The materials for normal reading paradigm 2 The data is available here: https://osf.io/q3zws/ EARLY first fixation duration (FFD) the duration of word w that is first fixated first pass duration (FPD) the sum of the fixations before eyes leave the word w LATE number of fixations (NFIX) the number of times word w that is fixated fixation probability (FP) the probability that word w is fixated mean fixation duration (MFD) the average fixation durations for word w total fixation duration (TFD) the total duration of word w that is fixated n re-fixations (NR) the number of times word w that is fixated after the first fixation re-read probability (RRP) the probability of word w that is fixated more than once CONTEXT total regression-from duration (TRD) the total duration of regressions from word w w -2 fixation probability ( w -2 FP) the fixation probability of the word w -2 w -1 fixation probability ( w -1 FP) the fixation probability of the word w -1 w +1 fixation probability ( w +1 FP) the fixation probability of the word w +1 w +2 fixation probability ( w +2 FP) the fixation probability of the word w +2 w -2 fixation duration ( w -2 FD) the fixation duration of the word w -2 w -1 fixation duration ( w -1 FD) the fixation duration of the word w -1 w +1 fixation duration ( w +1 FD) the fixation duration of the word w +1 w +2 fixation duration ( w +2 FD) the fixation duration of the word w +2 Table 1: Eye-tracking features used in the NER task.",
"consist of two datasets: 400 movie reviews from Stanford Sentiment Treebank (Socher et al., 2013) with manually annotated sentiment labels, including 123 neutral, 137 negative and 140 positive sentences; 300 paragraphs about famous people from Wikipedia relation extraction corpus (Culotta et al., 2006) labeled with 11 relationship types, such as award, education.",
"We also tested our model on NER task.",
"For NER, the selected 700 sentences in the above two tasks are annotated with three types of entities: PERSON, ORGANIZATION, and LOCATION.",
"All annotated datasets 3 are publicly available.",
"The cognitive processing signals and textual features used for each task in this work are the same as (Hollenstein et al., 2019a).",
"Eye-tracking Features.",
"Eye-tracking signals record human gaze behavior while reading.",
"The eye-tracking data of ZuCo are collected by an infrared video-based eye tracker EyeLink 1000 Plus with a sampling rate of 500 Hz.",
"For NER, we used 17 eye-tracking features that cover all stages of gaze behaviors and the effect of context.",
"According to the reading process, these features are divided into three groups: EARLY , the gaze behavior when a word is fixated for the first time; LATE , the gaze behavior over a word that is fixated many times; CONTEXT , the eye-tracking features over neighboring words of the current word.",
"The 17 eye-tracking features used in the NER task are shown in the Table 1. In the other two tasks, we employed 5 gaze behaviors, including the first fixation duration (FFD), the number of fixations (NFIX), the total fixation duration (TFD), the first pass duration 3 https://github.com/DS3Lab/zuco-nlp/ (FPD), the gaze duration (GD) that is the duration of the first time eyes move to the current word until eyes leave the word.",
"EEG Features.",
"EEG signals record the brain's electrical activity in the cerebral cortex by placing electrodes on the scalp of the subject.",
"In the datasets we used, EEG signals are recorded by a 128-channel EEG Geodesic Hydrocel system (Elec-trical Geodesics, Eugene, Oregon) at a sampling rate of 500 Hz with a bandpass of 0.1 to 100 Hz.",
"The original EEG signals recorded are of 128 dimensions.",
"Among them, 23 EEG signals are removed during preprocessing since they are not related to the cognitive processing (Hollenstein et al., 2018).",
"After preprocessing, we obtained 105 EEG signals.",
"The left EEG signals are divided into 8 frequency bands by the frequency of brain's electrical signals: theta 1 (t1, 4-6 Hz), theta 2 (t2, 6.5-8 Hz), alpha 1 (a1, 8.5-10 Hz), alpha 2 (a2, 10.5-13 Hz), beta 1 (b1, 13.5-18 Hz), beta 2 (b2, 18.5-30 Hz), gamma 1 (g1, 30.5-40 Hz) and gamma 2 (g2, 40-49.5 Hz).",
"The frequency bands reflects the different functions of brain cognitive processing.",
"For NER, we used 8 EEG features that are obtained by averaging the 105 EEG signals at each frequency band.",
"For the other two tasks, EEG features were obtained by averaging the 105 signals over all frequency bands.",
"All used EEG features are obtained by averaging over all subjects and normalization.",
"We evaluated three NLP tasks in terms of precision, recall and F1 in our experiments.",
"Word embeddings of all NLP tasks were initialized with the publicly available pretrained GloVe (Pennington Signals Model NER Sentiment Analysis Relation Extraction P (%) R (%) F1 (%) P (%) R (%) F1 (%) P (%) R (%) F1 (%) Base 89.34 78.60 83.48 59.47 59.42 58.27 79.52 75.67 75.25 eye (Hollenstein et al., 2019a) 86.2 84.3 85.1 65.1 61.9 62.0 61.4 61.7 61.5 Base 90.56 81.05 85.43 64.26 61.96 61.19 82.01 78.23 77.95 Base+TA 90.75 81.77 85.93 64.63 62.71 61.41 83.26 76.47 78.04 CogAlign 90.76 82.52 86.41 62.86 64.10 62.30 78.33 82.06 78.56 EEG (Hollenstein et al., 2019a) 86.7 81.5 83.9 68.3 64.8 65.1 60.5 60.2 60.3 Base 89.82 80.55 84.76 64.09 60.29 59.79 82.79 77.16 77.61 Base+TA 89.54 82.22 85.62 62.20 62.19 60.91 80.83 78.46 77.81 CogAlign 89.87 83.08 86.21 63.11 65.38 62.81 77.94 82.60 78.66 eye +EEG (Hollenstein et al., 2019a) 85.1 83.2 84.0 66.3 59.3 60.8 59.8 60.0 59.8 Base 89.70 81.11 85.11 62.86 61.49 60.84 79.00 76.52 77.72 Base+TA 90.75 82.94 86.31 65.22 63.88 63.23 82.24 77.53 78.12 CogAlign 91.28 83.02 86.79 65.11 65.94 65.40 78.66 82.07 78.93 Table 2: Results of CogAlign and other methods on the three NLP tasks augmented with eye-tracking features (eye), EEG features (EEG), and both (eye+EEG).",
"et al., 2014) vectors of 300 dimensions.",
"For NER, we used 30-dimensional randomly initialized character embeddings.",
"We set the dimension of hidden states of LSTM to 50 for both the private Bi-LSTM and shared Bi-LSTM.",
"We performed 10-fold cross validation for NER and sentiment analysis and 5-fold cross validation for relation extraction.",
"We compared our model with previous state-of-the-art methods on ZuCo dataset.",
"The method by Hollenstein et al. (2019a) incorporates cognitive processing signals into their model via direct concatenation mentioned before.",
"Results of CogAlign on the three NLP tasks are shown in Table 2. From the table, we observe that:",
"By just simply concatenating word embeddings with cognitive processing signals, the Base model is better than the model without using any cognitive processing signals, indicating that cognitive processing signals (either eye-tracking or EEG signals) can improve all three NLP tasks.",
"Notably, the improvements gained by eye-tracking features are larger than those obtained by EEG signals while the combination of both does not improve over only using one of them.",
"We conjecture that this may be due to the low signal-to-noise ratio of EEG signals, which further decreases when two signals are combined together.",
"Compared with the Base model, the Base+TA achieves better results on all NLP tasks.",
"The text-aware attention gains an absolute improvement of 0.88, 2.04, 0.17 F1 on NER, sentiment analysis, and relation extraction, respectively.",
"With Base+TA, the best results for most tasks are obtained by the combination of eye-tracking and EEG signals.",
"This suggests that the proposed text-aware attention may have alleviated the noise problem of cognitive processing signals.",
"The proposed CogAlign achieves the highest F1 over all three tasks, with improvements of 0.48, 2.17 and 0.87 F1 over Base+TA on NER, sentiment analysis and relation extraction, respectively, which demonstrates the effectiveness of our proposed model.",
"In addition, CogAlign with both cognitive processing signals obtains new state-of-the-art performance in all NLP tasks.",
"This suggests that CogAlign is able to effectively augment neural models with cognitive processing signals.",
"To take a deep look into the improvements contributed by each part of our model, we perform ablation study on all three NLP tasks with two cognitive processing signals.",
"The ablation test includes: (1) w/o text-aware attention , removing text-aware attention mechanism; (2) w/o cognitive loss , discarding the loss of the cognitive predictor whose inputs are cognitive processing signals; (3) w/o modality discriminator , removing the discriminator to train parameters with the task loss.",
"Table 3 reports the ablation study results.",
"The absence of the text-aware attention, cognitive loss and modality discriminator results in a significant drop in performance.",
"This demonstrates that these components all contribute to the effective incorporation of cognitive processing signals into neural models of the three target tasks.",
"CogAlign outperforms both (2) w/o cognitive loss and (3) w/o modality discriminator by a great margin, indicating that the cognitive features can significantly enhance neural models.",
"Furthermore, we visualize the distribution of hidden states learned by the shared Bi-LSTM to give a more intuitive demonstration of the effect of adversarial learning.",
"In Figure 2, clearly, the modality discriminator with adversarial learning forces the shared Bi-LSTM encoder to align textual inputs to cognitive processing signals in the same space.",
"In addition to denoising the cognitive processing signals, the text-aware attention mechanism also obtains the task-specific features.",
"To have a clear view of the role that the text-aware attention mechanism plays in CogAlign, we randomly choose samples and visualize the average attention weights over each signal in Figure 3. For eye-tracking, signals reflecting the late",
"syn-(a) eye-tracking",
"tactic processing, such as NFIX' (number of fix-ation), TFD' (total fixation duration), play an important role in the three tasks.",
"These results are consistent with findings in cognitive neuroscience.",
"In cognitive neuroscience, researchers have shown that readers tend to gaze at nouns repeatedly (Furt-ner et al., 2009) (related to the eye-tracking signal NFIX, the number of fixations) and there is a dependency relationship between regression features and sentence syntactic structures (Lopopolo et al., 2019).",
"In other NLP tasks that infused eye-tracking features, the late gaze features have also proved to be more important than early gaze features, such as multiword expression extraction (Rohanian et al., 2017).",
"Moreover, from the additional eye-tracking used in NER, we can find that the cognitive features from the neighboring words are helpful to identify entity, such as w -2 FP' ( w -2 fixation probability), w +1 FP' ( w +1 fixation probability).",
"Since a single EEG signal has no practical meaning, we only visualize the attention weights over EEG signals used in the NER task.",
"Obviously, attentions to t1' ( theta 1 ) and a2' ( alpha 2 ) are stronger than other signals, suggesting that low frequency electric activities in the brain are obvious when we recognize an entity.",
"The cognitively-inspired NLP is limited by the collection of cognitive processing signals.",
"Thus, we further investigate whether our model can transfer cognitive features to other datasets without cognitive processing signals for the same task.",
"We enable transfer learning in CogAlign with a method similar to the alternating training approach (Luong et al., 2016) that optimizes each task for a fixed number of mini-batches before shifting to the next task.",
"In our case, we alternately feed instances from the ZuCo dataset and those from other datasets built for the same target task but without cognitive processing signals into CogAlign.",
"Since CogAlign is a multi-task learning framework, model parameters can be updated either by data with cognitive processing signals or by data without such signals, where task-specific loss is used in both situations.",
"Please notice that only textual inputs are fed into trained CogAlign for inference.",
"To evaluate the capacity of CogAlign in transferring cognitive features, we select benchmark datasets for NER and sentiment analysis: Wikigold (Balasuriya et al., 2009) and Stanford Sentiment Treebank (Socher et al., 2013).",
"Since no other datasets use the same set of relation types as that in ZuCo dataset, we do not test the relation extraction task for transfer learning.",
"To ensure that the same textual data are used for comparison, we add a new baseline model (baseline (+Zuco text)) that is trained on the combination of textual data in ZuCo and benchmark dataset.",
"Additionally, as CogAlign uses two encoders for inference (i.e., the textual encoder and shared encoder), for a fair comparison, we setup another baseline (baseline (two encoders)) that also uses two encoders fed with the same textual inputs.",
"The experimental setup is the same as mentioned before.",
"two baselines.",
"It indicates that CogAlign is able to effectively transfer cognitive knowledge (either eye-tracking or EEG) from ZuCo to other datasets.",
"Results show that the best performance is achieved by transferring both eye-tracking and EEG signals at the same time.",
"In this paper, we have presented CogAlign, a framework that can effectively fuse cognitive processing signals into neural models of various NLP tasks by learning to align the textual and cognitive modality at both word and sentence level.",
"Experiments demonstrate that CogAlign achieves new state-of-the-art results on three NLP tasks on the Zuco dataset.",
"Analyses suggest that the text-aware attention in CogAlign can learn task-related cognitive processing signals by attention weights while the modality discriminator with adversarial learning forces CogAlign to learn cognitive and textual representations in the unified space.",
"Further experiments exhibit that CogAlign is able to transfer cognitive information from Zuco to other datasets without cognitive processing signals.",
"The present research was partially supported by the National Key Research and Development Program of China (Grant No. 2019QY1802) and Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400).",
"We would like to thank the anonymous reviewers for their insightful comments."
] | [
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Although Transformer has achieved great successes on many NLP tasks, its heavy structure with fully-connected attention connections leads to dependencies on large training data.",
"In this paper, we present Star-Transformer , a lightweight alternative by careful sparsification.",
"To reduce model complexity, we replace the fully-connected structure with a star-shaped topology, in which every two non-adjacent nodes are connected through a shared relay node.",
"Thus, complexity is reduced from quadratic to linear, while preserving the capacity to capture both local composition and long-range dependency.",
"The experiments on four tasks (22 datasets) show that Star-Transformer achieved significant improvements against the standard Transformer for the modestly sized datasets.",
"Recently, the fully-connected attention-based models, like Transformer (Vaswani et al., 2017), become popular in natural language processing (NLP) applications, notably machine translation (Vaswani et al., 2017) and language modeling (Radford et al., 2018).",
"Some recent work also suggest that Transformer can be an alternative to recurrent neural networks (RNNs) and convolutional neural networks (CNNs) in many NLP tasks, such as GPT (Radford et al., 2018), BERT (Devlin et al., 2018), Transformer-XL (Dai et al., 2019) and Universal Transformer (Dehghani et al., 2018).",
"More specifically, there are two limitations of the Transformer.",
"First, the computation and memCorresponding Author.",
"ory overhead of the Transformer are quadratic to the sequence length.",
"This is especially problematic with long sentences.",
"Transformer-XL (Dai et al., 2019) provides a solution which achieves the acceleration and performance improvement, but it is specifically designed for the language modeling task.",
"Second, studies indicate that Transformer would fail on many tasks if the training data is limited, unless it is pre-trained on a large corpus.",
"(Radford et al., 2018; Devlin et al., 2018).",
"A key observation is that Transformer does not exploit prior knowledge well.",
"For example, the local compositionality is already a robust inductive bias for modeling the text sequence.",
"However, the Transformer learns this bias from scratch, along with non-local compositionality, thereby increasing the learning cost.",
"The key insight is then whether leveraging strong prior knowledge can help to lighten up the architecture.",
"architecture by moving the fully-connected topology into a star-shaped structure.",
"Fig-1 gives an overview.",
"Star-Transformer has two kinds of connections.",
"The radical connections preserve the non-local communication and remove the redundancy in fully-connected network.",
"The ring connections embody the local-compositionality prior, which has the same role as in CNNs/RNNs.",
"The direct outcome of our design is the improvement of both efficiency and learning cost: the computation cost is reduced from quadratic to linear as a function of input sequence length.",
"An inherent advantage is that the ring connections can effectively reduce the burden of the unbias learning of local and non-local compositionality and improve the generalization ability of the model.",
"What remains to be tested is whether one shared relay node is capable of capturing the long-range dependencies.",
"We evaluate the Star-Transformer on three NLP tasks including Text Classification, Natural Language Inference, and Sequence Labelling.",
"Experimental results show that Star-Transformer outperforms the standard Transformer consistently and has less computation complexity.",
"An additional analysis on a simulation task indicates that Star-Transformer preserve the ability to handle with long-range dependencies which is a crucial feature of the standard Transformer.",
"In this paper, we claim three contributions as the following and our code is available on Github 1 : Compared to the standard Transformer, Star-Transformer has a lightweight structure but with an approximate ability to model the long-range dependencies.",
"It reduces the number of connections from n 2 to 2 n , where n is the sequence length.",
"The Star-Transformer divides the labor of semantic compositions between the radical and the ring connections.",
"The radical connections focus on the non-local compositions and the ring connections focus on the local composition.",
"Therefore, Star-Transformer works for modestly sized datasets and does not rely on heavy pre-training.",
"we verify that both Transformer and Star-Transformer are good at handling long-range dependencies compared to the LSTM and BiLSTM.",
"Recently, neural networks have proved very successful in learning text representation and have achieved state-of-the-art results in many different tasks.",
"Modelling Local Compositionality A popular approach is to represent each word as a low-dimensional vector and then learn the local semantic composition functions over the given sentence structures.",
"For example, Kim (2014); Kalchbren-ner et al. (2014) used CNNs to capture the semantic representation of sentences, whereas Cho et al. (2014) used RNNs.",
"These methods are biased for learning local compositional functions and are hard to capture the long-term dependencies in a text sequence.",
"In order to augment the ability to model the nonlocal compositionality, a class of improved methods utilizes various self-attention mechanisms to aggregate the weighted information of each word, which can be used to get sentence-level representations for classification tasks (Yang et al., 2016; Lin et al., 2017; Shen et al., 2018a).",
"Another class of improved methods augments neural networks with a re-reading ability or global state while processing each word (Cheng et al., 2016; Zhang et al., 2018).",
"One class of models incorporate syntactic tree into the network structure for learning sentence representations (Tai et al., 2015; Zhu et al., 2015).",
"Another type of models learns the dependencies between words based entirely on self-attention without any recurrent or convolutional layers, such as Transformer (Vaswani et al., 2017), which has achieved state-of-the-art results on a machine translation task.",
"The success of Transformer has raised a large body of follow-up work.",
"Therefore, some Transformer variations are also proposed, such as GPT (Radford et al., 2018), BERT (Devlin et al., 2018), Transformer-XL (Dai et al., 2019) , Universal Transformer (Dehghani et al., 2018) and CN 3 (Liu et al., 2018a).",
"However, those Transformer-based methods usually require a large training corpus.",
"When applying them on modestly sized datasets, we need the help of semi-supervised learning and unsupervised pretraining techniques (Radford et al., 2018).",
"Graph Neural Networks Star-Transformer is also inspired by the recent graph networks (Gilmer et al., 2017; Kipf and Welling, 2016; Battaglia et al., 2018; Liu et al., 2018b), in which the information fusion progresses via message-passing across the whole graph.",
"The graph structure of the Star-Transformer is star-shaped by introducing a virtual relay node.",
"The radical and ring connections give a better bal-ance between the local and non-local compositionality.",
"Compared to the previous augmented models (Yang et al., 2016; Lin et al., 2017; Shen et al., 2018a; Cheng et al., 2016; Zhang et al., 2018), the implementation of Star-Transform is purely based on the attention mechanism similar to the standard Transformer, which is simpler and well suited for parallel computation.",
"Due to its better parallel capacity and lower complexity, the Star-Transformer is faster than RNNs or Transformer, especially on modeling long sequences.",
"The Star-Transformer consists of one relay node and n satellite nodes.",
"The state of i -th satellite node represents the features of the i -th token in a text sequence.",
"The relay node acts as a virtual hub to gather and scatter information from and to all the satellite nodes.",
"Star-Transformer has a star-shaped structure, with two kinds of connections in the: the radical connections and the ring connections.",
"Radical Connections For a network of n satellite nodes, there are n radical connections.",
"Each connection links a satellite node to the shared relay node.",
"With the radical connections, every two non-adjacent satellite nodes are two-hop neighbors and can receive non-local information with a two-step update.",
"Ring Connections Since text input is a sequence, we bake such prior as an inductive bias.",
"Therefore, we connect the adjacent satellite nodes to capture the relationship of local compositions.",
"The first and last nodes are also connected.",
"Thus, all these local connections constitute a ring-shaped structure.",
"Note that the ring connections allow each satellite node to gather information from its neighbors and plays the same role to CNNs or bidirectional RNNs.",
"With the radical and ring connections, Star-Transformer can capture both the non-local and local compositions simultaneously.",
"Different from the standard Transformer, we make a division of labor, where the radical connections capture nonlocal compositions, whereas the ring connections attend to local compositions.",
"The implementation of the Star-Transformer is very similar to the standard Transformer, in which the information exchange is based on the attention mechanism (Vaswani et al., 2017).",
"Multi-head Attention Just as in the standard Transformer, we use the scaled dot-product attention (Vaswani et al., 2017).",
"Given a sequence of vectors H R n d , we can use a query vector q R 1 d to soft select the relevant information with attention.",
"where K = HWK , V = HWV , and WK , WV",
"are learnable parameters.",
"To gather more useful information from H , similar to multi-channels in CNNs, we can use multihead attention with k heads.",
"MultiAtt( q , H ) = ( a 1 a k ) WO , (2) a i = Att( qW Q i , HWK i , HWV i ) , i [1 , k ] (3) where denotes the concatenation operation, and W Qi , W Ki , W Vi , WO are learnable parameters.",
"Update Let s t R 1 d and H t R n d denote the states for the relay node and all the n satellite nodes at step t .",
"When using the Star-Transformer to encode a text sequence of length n , we start from its embedding E = [ e 1 ; ; e n ] , where e i R 1 d is the embedding of the i-th token.",
"We initialize the state with H 0 = E and s 0 = average ( E ) .",
"The update of the Star-Transformer at step t can be divided into two alternative phases: (1) the update of the satellite nodes and (2) the update of the relay node.",
"At the first phase, the state of each satellite node h i are updated from its adjacent nodes, including the neighbor nodes h i 1 , h i +1 in the sequence, the relay node s t , its previous state, and its corresponding token embedding.",
"where C ti denotes the context information for the i -th satellite node.",
"Thus, the update of each satellite node is similar to the recurrent network, except that the update fashion is based on attention mechanism.",
"After the information exchange, a layer normalization operation (Ba et al., 2016) is used.",
"At the second phase, the relay node s t summarizes the information of all the satellite nodes and its previous state.",
"By alternatively updating update the satellite and relay nodes, the Star-Transformer finally captures all the local and non-local compositions for an input text sequence.",
"Position Embeddings To incorporate the sequence information, we also add the learnable position embeddings, which are added with the token embeddings at the first layer.",
"The overall update algorithm of the Star-Transformer is shown in the Alg-1.",
"After T rounds of update, the final states of HT and s T can be used for various tasks such as sequence labeling and classification.",
"For different tasks, we feed them to different task-specific modules.",
"For classification, we generate the fix-length sentence-level vector representation by applying a max-pooling across the final layer and mixing it with s T , this vector is fed into a Multiple Layer Perceptron (MLP) classifier.",
"For the sequence labeling task, the HT provides features corresponding to all the input tokens.",
"Since our goal is making the Transformer lightweight and easy to train with modestly sized dataset, we have removed many connections compared with the standard Transformer (see Fig-1).",
"If the sequence length is n and the dimension of hidden states is d , the computation complexity of one layer in the standard Transformer is O ( n 2 d ) .",
"The Star-Transformer has two phases, the update of ring connections costs O (5 nd ) (the constant 5 comes from the size of context information C ), and the update of radical connections costs O ( nd ) , so the total cost of one layer in the Star-Transformer is O (6 nd ) .",
"In theory, Star-Transformer can cover all the possible relationships in the standard Transformer.",
"For example, any relationship h i h j in the standard Transformer can be simulated by h i s h j .",
"The experiment on the simulation task in Sec-5.1 provides some evidence to show the virtual node s could handle long-range dependencies.",
"Following this aspect, we can give a rough analysis of the path length of dependencies in these models.",
"As discussed in the Transformer paper (Vaswani et al., 2017), the maximum dependency path length of RNN and Transformer are O ( n ) , O (1) , respectively.",
"Star-Transformer can pass the message from one node to another node via the relay node so that the maximum dependency path length is also O (1) , with a constant two comparing to Transformer.",
"Compare with the standard Transformer, all positions are processed in parallel, pair-wise connec-Dataset Train Dev.",
"tions are replaced with a gather and dispatch mechanism.",
"As a result, we accelerate the Transformer 10 times on the simulation task and 4.5 times on real tasks.",
"The model also preserves the ability to handle long input sequences.",
"Besides the acceleration, the Star-Transformer achieves significant improvement on some modestly sized datasets.",
"We evaluate Star-Transformer on one simulation task to probe its behavior when challenged with long-range dependency problem, and three real tasks (Text Classification, Natural Language Inference, and Sequence Labelling).",
"All experiments are ran on a NVIDIA Titan X card.",
"Datasets used in this paper are listed in the Tab-1.",
"We use the Adam (Kingma and Ba, 2014) as our optimizer.",
"On the real task, we set the embedding size to 300 and initialized with GloVe (Pennington et al., 2014).",
"And the symbol Ours + Char means an additional character-level pre-trained embedding JMT (Hashimoto et al., 2017) is used.",
"Therefore, the total size of embedding should be 400 which as a result of the concatenation of GloVe and JMT.",
"We also fix the embedding layer of the Star-Transformer in all experiments.",
"Since semior unsupervised model is also a feasible solution to improve the model in a parallel direction, such as the ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018), we exclude these models in the comparison and focus on the relevant architectures.",
"In this section, we introduce a simulation task on the synthetic data to probe the efficiency and non-local/long-range dependencies of LSTM, Transformer, and the Star-Transformer.",
"As mentioned in (Vaswani et al., 2017), the maximum path length of long-range dependencies of LSTM and Transformer are O ( n ) and O (1) , where n is the sequence length.",
"The maximum dependency path length of Star-Transformer is O (1) with a constant two via the relay node.",
"To validate the ability to deal with long-range dependencies, we design a simulation task named Masked Summation.",
"The input of this task is a matrix X R n d , it has n columns and each column has d elements.",
"The first dimension indicates the mask value X i 0 { 0 , 1 } , 0 means the column is ignored in summation.",
"The rest d 1 elements are real numbers in drawn uniformly from the range [0 , 1) .",
"The target is a d 1 dimensional vector which equals the summation of all the columns with the mask value 1 .",
"There is an implicit variable k to control the number of 1 in the input.",
"Note that a simple baseline is always guessing the value k/ 2 .",
"The evaluation metric is the Mean Square Error (MSE), and the generated dataset has (10k/10k/10k) samples in (train/dev/test) sets.",
"The Fig-2 show a case of the masked summation task.",
"The mask summation task asks the model to recognize the mask value and gather columns in different positions.",
"When the sequence length n is significantly higher than the number of the columns k , the model will face the long-range dependencies problem.",
"The Fig-3a shows the per-Input 1 0.3 0.4 0 0.5 0.7 0 0.1 0.2 1 0.5 0.9 0 0.4 0.2 0 0.6 0.8 0 0.1 0.3 1 0.1 0.6 0.9 1.9 Target Figure 2: An example of the masked summation task, the input is a sequence of n vectors, each vector has d dimension, and there are total k vectors which have the mask value equals 1 .",
"(a) MSE loss on the masked summation when n = 200 , k = 10 , d = 10 .",
"The k/ 2 line means the MSE loss when the model always guess the exception value k/ 2 .",
"formance curves of models on various lengths.",
"Although the task is easy, the performance of LSTM and BiLSTM dropped quickly when the sequence length increased.",
"However, both Transformer and Star-Transformer performed consistently on various lengths.",
"The result indicates the Star-Transformer preserves the ability to deal with the non-local/long-range dependencies.",
"Besides the performance comparison, we also study the speed with this simulation task since we could ignore the affection of padding, masking, and data processing.",
"We also report the inference time in the Fig-3b, which shows that Transformer is faster than LSTM and BiLSTM a lot, and Star-Transformer is faster than Transformer, especially on the long sequence.",
"Text classification is a basic NLP task, and we select two datasets to observe the performance of our model in different conditions, Stanford Sentiment Treebank(SST) dataset (Socher et al., 2013) and MTL-16 (Liu et al., 2017) consists of 16 small datasets on various domains.",
"We truncate the sequence which its length higher than 256 to ensure the standard Transformer can run on a single GPU card.",
"For classification tasks, we use the state of the relay node s T plus the feature of max pooling on satellite nodes max( HT ) as the final representation and feed it into the softmax classifier.",
"The description of hyper-parameters is listed in Tab-1 and Appendix.",
"Results on SST and MTL-16 datasets are listed in Tab-2,3, respectively.",
"On the SST, the Star-Transformer achieves 2.5 points improvement against the standard Transformer and beat the most models.",
"Also, on the MTL-16, the Star-Transformer outperform the standard Transformer in all 16 datasets, the improvement of the average accuracy is 4.2.",
"The Star-Transformer also gets better results compared with existing works.",
"As we mentioned in the introduction, the standard Transformer requires large training set to reveal its power.",
"Our experiments show the Star-Dataset Acc (%) Test Time (ms) Len.",
"Transformer could work well on the small dataset which only has 1400 training samples.",
"Results of the time-consuming show the Star-Transformer could be 4.5 times fast than the standard Transformer on average.",
"Natural Language Inference (NLI) asks the model to identify the semantic relationship between a premise sentence and a corresponding hypothesis sentence.",
"In this paper, we use the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) for evaluation.",
"Since we want to study how the model encodes the sentence as a vector representation, we set Star-Transformer as a sentence vector-based model and compared it with sentence vector-based models.",
"In this experiment, we follow the previous work (Bowman et al., 2016) to use concat ( r 1 , r 2 , (cid:107) r 1 r 2 (cid:107) , r 1 r 2 ) as the classification feature.",
"The r 1 , r 2 are representations of premise and hypothesis sentence, it is calculated by s T + max( HT ) which is same with the classification task.",
"See Appendix for the detail of hyper-parameters.",
"As shown in Tab-4, the Star-Transformer outperforms most typical baselines (DiSAN, SPINN) and achieves comparable results compared with the state-of-the-art model.",
"Notably, our model beats standard Transformer by a large margin, which is easy to overfit although we have made a careful hyper-parameters' searching for Transformer.",
"The SNLI dataset is not a small dataset in NLP area, so improving the generalization ability of the Transformer is a significant topic.",
"The best result in Tab-4 (Yoon et al., 2018) using a large network and fine-tuned hyper-parameters, they get the best result on SNLI but an undistinguished result on SST, see Tab-2.",
"To verify the ability of our model in sequence labeling, we choose two classic sequence labeling tasks: Part-of-Speech (POS) tagging and Named Entity Recognition (NER) task.",
"Three datasets are used as our benchmark: one POS tagging dataset from Penn Treebank (PTB) (Marcus et al., 1993), and two NER datasets from CoNLL2003 (Sang and Meulder, 2003), CoNLL2012 (Pradhan et al., 2012).",
"We use the fi-Model Adv Tech POS NER PTB CoNLL2003 CoNLL2012 char CRF Acc F1 F1 (Ling et al., 2015) (cid:88) (cid:88) 97.78 -(Collobert et al., 2011) (cid:88) (cid:88) 97.29 89.59 (Huang et al., 2015) (cid:88) (cid:88) 97.55 90.10 (Chiu and Nichols, 2016a) (cid:88) (cid:88) -90.69 86.35 (Ma and Hovy, 2016) (cid:88) (cid:88) 97.55 91.06 (Nguyen et al., 2016) (cid:88) (cid:88) -91.2 (Chiu and Nichols, 2016b) (cid:88) (cid:88) -91.62 86.28 (Zhang et al., 2018) (cid:88) (cid:88) 97.55 91.57 (Akhundov et al., 2018) (cid:88) (cid:88) 97.43 91.11 87.84 Transformer 96.31 86.48 83.57 Transformer + Char (cid:88) 97.04 88.26 85.14 Star-Transformer 97.14 90.93 86.30 Star-Transformer + Char (cid:88) 97.64 91.89 87.64 Star-Transformer + Char + CRF (cid:88) (cid:88) 97.68 91.98 87.88 Table 5: Results on sequence labeling tasks.",
"nal state of satellite nodes HT to classify the label in each position.",
"Since we believe that the complex neural network could be an alternative of the CRF, we also report the result without CRF layer.",
"As shown in Tab-5, Star-Transformer achieves the state-of-the-art performance on sequence labeling tasks.",
"The Star-Transformer + Char has already beat most of the competitors.",
"Star-Transformer could achieve such results without CRF, suggesting that the model has enough capability to capture the partial ability of the CRF.",
"The Star-Transformer also outperforms the standard Transformer on sequence labeling tasks with a significant gap.",
"In this section, we perform an ablation study to test the effectiveness of the radical and ring connections.",
"We test two variants of our models, the first variants",
"(a) remove the radical connections and only keep the ring connections.",
"Without the radical connections, the maximum path length of this variant becomes O ( n ) .",
"The second variant",
"(b) removes the ring connections and remains the radical connections.",
"Results in Tab-6 give some insights, the variant",
"(a) loses the ability to handle long-range dependencies, so it performs worse on both the simulation and real tasks.",
"However, the performance drops on SNLI and CoNLL03 is moderate since the remained ring connections still capture the local features.",
"The variant",
"(b) still works on the simulation task since the maximum path length stays unchanged.",
"Without the ring connections, it loses its performance heavily on real tasks.",
"Therefore, both the radical and ring connections are necessary to our model.",
"In this paper, we present Star-Transformer which reduce the computation complexity of the standard Transformer by carefully sparsifying the topology.",
"We compare the standard Transformer with other models on one toy dataset and 21 real datasets and find Star-Transformer outperforms the standard Transformer and achieves comparable results with state-of-the-art models.",
"This work verifies the ability of Star-Transformer by excluding the factor of unsupervised pre-training.",
"In the future work, we will investigate the ability of Star-Transformer by unsupervised pre-training on the large corpus.",
"Moreover, we also want to introduce more NLP prior knowledge into the model.",
"We would like to thank the anonymous reviewers for their valuable comments.",
"The research work is supported by Shanghai Municipal Science and Technology Commission (No. 17JC1404100 and 16JC1420401), National Key Research and Development Program of China (No. 2017YFB1002104), and National Natural Science Foundation of China (No. 61672162 and 61751201)."
] | [
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"objective",
"abstain",
"other",
"other"
] |
[
"Learning a shared dialog structure from a set of task-oriented dialogs is an important challenge in computational linguistics.",
"The learned dialog structure can shed light on how to analyze human dialogs, and more importantly contribute to the design and evaluation of dialog systems.",
"We propose to extract dialog structures using a modified VRNN model with discrete latent vectors.",
"Different from existing HMM-based models, our model is based on variational-autoencoder (VAE).",
"Such model is able to capture more dynamics in dialogs beyond the surface forms of the language.",
"We find that qualitatively, our method extracts meaningful dialog structure, and quantitatively, outperforms previous models on the ability to predict unseen data.",
"We further evaluate the model's effectiveness in a downstream task, the dialog system building task.",
"Experiments show that, by integrating the learned dialog structure into the reward function design, the model converges faster and to a better outcome in a reinforcement learning setting.",
"Human dialogs are like well-structured buildings, with words as the bricks, sentences as the floors, and topic transitions as the stairs connecting the whole building.",
"Therefore, discovering dialog structure is crucial for various areas in computational linguistics, such as dialog system building (Young, 2006), discourse analysis (Grosz and Sidner, 1986), and dialog summarization (Murray et al., 2005; Liu et al., 2010).",
"In domain specific tasks such as restaurant booking, it's common for people to follow a typical conversation flow.",
"Current dialog systems require human experts to design the dialog structure, which is time consuming and sometimes insufficient to satisfy various customer needs.",
"Therefore, it's of great importance to automatically discover dialog structure from existing human-human conversations and incorporate it into the dialog system design.",
"However, modeling human conversation is not easy for machines.",
"Some previous work rely on human annotations to learn dialog structures in supervised learning settings (Jurafsky, 1997).",
"But since human labeling is expensive and hard to obtain, such method is constrained by the small size of training examples, and by the limited number of application domains (Zhai and Williams, 2014).",
"Moreover, structure annotations on human conversation can be subjective, which makes it hard to reach inter-rater agreements.",
"Therefore, we propose an unsupervised method to infer the latent dialog structure since unsupervised methods do not require annotated dialog corpus.",
"Limited previous work has studied unsupervised methods to model the latent dialog structure.",
"Most of the previous methods use the hidden Markov model to capture the temporal dependency within human dialogs (Chotimongkol, 2008; Ritter et al., 2010; Zhai and Williams, 2014).",
"We propose to adopt a new type of models, the variational recurrent neural network (VRNN, a recurrent version of the VAE) (Chung et al., 2015), and infer the latent dialog structure with variational inference.",
"VRNN is suitable for modeling sequential information.",
"Compared to the simpler HMM models, VRNN also has the flexibility to model highly non-linear dynamics (Chung et al., 2015) in human dialogs.",
"Our basic approach assumes that the dialog structure is composed of a sequence of latent states.",
"Each conversational exchange (a pair of user and system utterances at time t ) belongs to a latent state , which has causal effect on the future latent states and the words the conversants produce.",
"Because discrete latent states are more interpretable than continuous ones, we combine VRNN with Gumbel-Softmax (Jang et al., 2016) to obtain discrete latent vectors to represent the latent states.",
"A common way to represent the dialog structure both visually and numerically is to construct a transition probability table among latent states.",
"The idea of transition table inspires us to develop two model variants to model the dependency between states indirectly and directly.",
"Once we obtain such a human-readable dialog structure, we can use it to facilitate many downstream tasks, such as dialog system training.",
"The motivation is that the dialog structure contains important information on the flow of the conversation; if the automatic dialog system can mimic the behaviour in human-human dialogs, it can interact with users in a more natural and user-friendly way.",
"Therefore, we propose to integrate the dialog structure information into the reward design of the reinforcement learning (RL).",
"Experiments show that the model with the proposed reward functions converges faster to a better success rate.",
"Variational Autoencoders (VAEs) (Kingma and Welling, 2013; Doersch, 2016; Kingma et al., 2014) have gained popularity in many computational linguistics tasks due to its interpretable generative model structure (Miao and Blunsom, 2016; Miao et al., 2017).",
"Zhao et al. (2018) applied VAE to learn discrete sentence representations and achieved good results.",
"This is similar to our work, but we focus more on modeling the dialog structure and using the learned structure to improve the dialog system training.",
"Serban et al. (2017) presented a VHRED model which combines the VRNN and encoder-decoder structure for direct dialog response generation.",
"While also similar, our model uses discrete latent vectors instead of continuous ones, and re-constructs the utterances to recover the latent dialog structure, instead of modeling the responses directly.",
"There are some previous studies on discovering latent structure of conversations (Chotimongkol, 2008; Ritter et al., 2010; Zhai and Williams, 2014).",
"But they are all based on Hidden Markov Model (HMM).",
"Ritter et al. (2010) extended the HMM-based method in Chotimongkol (2008) by adding additional word sources to social interaction data on Twitter.",
"Zhai and Williams (2014) decoupled the number of topics and the number of states to allow an additional layer of information in task-oriented dialogs.",
"Our work also focuses on task-oriented dialogs but adopts the VRNN to perform variational inference in the model.",
"According to Chung et al. (2015), the VRNN retains the flexibility to model highly non-linear dynamics, compared to simpler Dynamic Bayesian Network models such as HMM.",
"Gunasekara et al. (2017) described a Quantized-Dialog Language Model (QDLM) for task-oriented dialog systems, which performs clustering on utterances and models the dialog as a sequence of clusters to predict future responses.",
"The idea of dialog discretization is similar to our method, but we choose VAE over simple clustering to allow more context-sensitivity and to capture more dynamics in the dialog beyond surface forms of the conversation.",
"Additionally, we propose to utilize the dialog structure information to improve the dialog system training.",
"Traditional reward functions in RL dialog training use delayed reward to provide feedback to the model.",
"However, delayed reward suffers from potential slow convergence rate problem, so some studies integrated estimated per-turn immediate reward.",
"For example, Ferreira and Lef`evre (2013) studied expert-based reward shaping in dialog management.",
"We use the KL-divergence between the transition probabilities and the predicted probabilities as the immediate per-turn reward.",
"Different from the expert-based reward shaping, such reward does not require any manual labels and is generalizable to different tasks.",
"Fig. 1 gives an overview of the Discrete-VRNN (D-VRNN) model and the Direct-Discrete-VRNN (DD-VRNN) model.",
"In principal, the VRNN contains a VAE at every timestep, and these VAEs are connected by a state-level RNN.",
"The hidden state variable h t-1 in this RNN encodes the dialog context up to time t .",
"This connection helps the VRNN to model the temporal structure of the dialog (Chung et al., 2015).",
"The observed inputs x t to the model is the constructed utterance embeddings.",
"z t is the latent vector in the VRNN at time t .",
"Different from Chung et al. (2015), z t in our model is a discrete one-hot vector of dimension N , where N is the total number of latent states.",
"The major difference between D-VRNN and DD-VRNN lies in the priors of z t .",
"In D-VRNN, we assume that z t depends on the entire dialog z t h t x t h t-1 z t-1",
"context h t-1 , shown in red in Fig.",
"1(a), which is the same as in Chung et al. (2015); while in DD-VRNN we assume that in the prior, z t directly depends on z t-1 in order to model the direct transition between different latent states, shown in red in Fig.",
"1(b).",
"We use z t and h t-1 to regenerate the current utterances x t instead of generating the next utterances x t+1 , shown in blue dotted lines in Fig. 1.",
"The idea of regeneration helps recover the dialog structure.",
"Next, the recurrence in the RNN takes h t-1 , x t and z t to update itself, and allows the context to be passed down as the dialog proceeds, shown in gray dash-dotted lines.",
"Finally in the inference, we construct the posterior of z t with the context h t 1 and x t , and infer z t by sampling from the posterior, shown in black dashed lines in Fig. 1.",
"The mathematical details of each operation are described below.",
"( ) are highly flexible feature extraction functions such as neural networks.",
"x , z , prior , enc , dec are feature extraction networks for the input x , the latent vector z , the prior, the encoder and the decoder.",
"Sentence Embedding .",
"u t = [ w 1 , t , w 2 , t , ... w n w ,t ] and s t = [ v 1 ,t , v 2 ,t , ... v n v ,t ] are the user utterance and the system utterance at time t , where w i , j and v i , j are individual words.",
"The concatenation of utterances from both parties, x t = [ u t , s t ] , is the observed variable in the VAE.",
"We use Mikolov et al. (2013) to perform word embedding and the average of the word embedding vectors of u t and s t are u t and s t .",
"The concatenation of u t and s t is used as the feature extraction of x t , namely x ( x t ) = [ u t , s t ] .",
"x ( x t ) is the model inputs.",
"Prior in D-VRNN .",
"The prior quantifies our assumption on z t before we observe the data.",
"In the D-VRNN, it's reasonable to assume that the prior of z t depends on the context h t-1 and follows the distribution shown in Eq.",
"(1), because conversation context is a critical factor that influences dialog transitions.",
"Since z t is discrete, we use softmax to obtain the distribution.",
"Prior in DD-VRNN .",
"The dependency of z t on the entire context h t-1 in Eq.",
"(1) makes it difficult to disentangle the relation between z t-1 and z t .",
"But this relation is crucial in decoding how conversations flow from one conversational exchange to the next one.",
"So in DD-VRNN, we directly model the influence of z t-1 on z t in the prior, shown in Eq.",
"(2) and Fig.",
"1(b).",
"To fit this prior distribution into the variational inference framework, we approximate p ( z t | x <t , z <t ) with p ( z t | z t-1 ) in Eq.",
"(3).",
"Later, we show that the designed new prior has benefits under certain scenarios.",
"z t softmax ( prior ( z t-1 )) (2) p ( x T , z T ) = T (cid:89) t =1 p ( x t | z t , x t ) p ( z t | x <t , z <t ) T (cid:89) t =1 p ( x t | z t , x t ) p ( z t | z t-1 ) (3) Generation .",
"z t is a summarization of the current conversational exchange under the context.",
"We use z t and h t-1 to reconstruct the current utterance x t .",
"This regeneration of x t allows us to recover the dialog structure.",
"We use two RNN decoders, dec1 and dec2 , parameterized by 1 and 2 to generate the original u t and s t respectively.",
"c t and d t are the hidden states of dec1 and dec2 .",
"The context h t-1 and feature extraction vector z ( z t ) are concatenated to form the initial hidden state h dec1 0 of dec1 .",
"c ( n w ,t ) is the last hidden state of dec1 .",
"Since v t is the response of u t and will be affected by u t , we concatenate c ( n w ,t ) to d 0 to pass the information from u t to v t .",
"This concatenated vector is used as h dec2 0 of dec2 .",
"This process is shown in Eq.",
"(4) and (5).",
"c 0 = [ h t 1 , z ( z t )] , w ( i,t ) , c ( i,t ) = f 1 ( w ( i 1 ,t ) , c ( i 1 ,t ) ) (4) d 0 = [ h t 1 , z ( z t ) , c ( n w ,t ) ] , v ( i,t ) , d ( i,t ) = f 2 ( v ( i 1 ,t ) , d ( i 1 ,t ) ) (5) Recurrence .",
"The state-level RNN updates its hidden state h t with h t-1 based on the following Eq.",
"(6).",
"f is a RNN parameterized by .",
"Inference .",
"We infer z t from the context h t-1 and current utterances x t , and construct the posterior distribution of z t by another softmax, shown in Eq.",
"(7).",
"Once we have the posterior distribution, we apply Gumbel-Softmax to take samples of z t .",
"D-VRNN and DD-VRNN differ in their priors but not in their inference, because we assume the direct transitions between z t in the prior instead of in the inference.",
"Loss function .",
"The objective function of VRNN is a timestep-wise variational lower bound, shown in Eq.",
"(8) (Chung et al., 2015).",
"To mitigate the vanishing latent variable problem in VAE, we incorporate bow-loss and Batch Prior Regularization (BPR) (Zhao et al., 2017, 2018) with tunable weights, to the final loss function, shown in Eq.",
"(9).",
"LVRNN = E q ( z T | x T ) [log p ( x t | z t , x <t ))+ T (cid:88) t =1 -KL ( q ( z t | x t , z <t ) (cid:107) p ( z t | x <t , z <t ))] (8) LD-VRNN = LVRNN-BPR + L bow (9) 3.1 Transition Probability Calculation A good way to represent a dialog structure both numerically and visually is to construct a transition probability table among latent states.",
"Such transition probability can also be used to design reward function in the RL training process.",
"We calculate transition table differently for D-VRNN and DD-VRNN due to their different priors.",
"D-VRNN .",
"From Eq.",
"(6), we know that h t is a function of x t and z t .",
"Combining Eq.",
"(1) and (6), we find that z t is a function of x t and z <t .",
"Therefore, z <t has an indirect influence on z t through h t-1 .",
"This indirect influence reinforces our assumption that the previous states z <t impacts future state z t , but also makes it hard to recover a clear structure and disentangle the direct impact of z t-1 on z t .",
"In order to better visualize the dialog structure and compare with the HMM-based models, we quantify the impact of z t-1 on z t by estimating a bi-gram transition probability table, where p i,j = #( state i , state j ) #( state i ) .",
"The numerator is the total number of the ordered tuples (state i, t-1 , state j, t ) and the denominator is the total number of state i in the dataset.",
"We choose a bi-gram transition table over a n-gram transition table with a bigger n , as the most recent context is usually the most relevant, but it should be noted that unlike the HMM models, the degree of transition in our model is not limited nor pre-determined, because z t captures all the context.",
"Depending on different applications, different n may be selected.",
"DD-VRNN As stated before, the dependency of z t on the entire context h t-1 creates difficulty in calculating the transition table.",
"This is our motivation to derive the prior in DD-VRNN.",
"The outputs from the softmax in the prior (Eq.",
"(2)) directly constitute the transition table.",
"So rather than estimating the transition probabilities by frequency count as in D-VRNN, we can optimize the loss function of DD-VRNN and get the parameters in Eq.",
"(2) that directly form the transition table.",
"In task-oriented dialog systems, the presence of certain named entities, such as food preference plays a crucial role in determining the phase of the dialog.",
"To make sure the latent states capture such useful information, we assign larger weights on the named entities when calculating the loss function in Eq.",
"(9).",
"The weights encourage the reconstructed utterances to have more correct named entities, therefore influencing the latent state to have better representation.",
"We refer this model as NE-D-VRNN (Named Entitiy Discrete-VRNN).",
"We test the proposed method on the CamRest676 corpus, which was released and collected by Wen et al. (2016).",
"The task is to help users find restaurants in Cambridge, UK.",
"While this task is highly similar to DSTC2, we choose this dataset instead of DSTC2 because it is relatively clean and comes with good entity extraction methods.",
"There are a total of 676 dialogs in this dataset with three information slots ( food, price range and area ) and three request table slots ( address, phone and postcode ).",
"We also evaluate our model on another dataset of simulated conversations, proposed in Zhao and Eskenazi (2018).",
"The task is to help users get the weather report in a certain place at a specific time.",
"The dialog system is controlled by a fixed structure and hand-set probabilities.",
"Therefore, learning the dialog structure of this dataset might be easier.",
"We assume each latent vector in the VAE emits one conversational exchange, including one user utterance and the corresponding system response at time t , and each conversational exchange corresponds to one latent vector, following Zhai and Williams (2014).",
"We use LSTM (Hochreiter and Schmidhuber, 1997) with 200-400 units for the RNNs, and a fully-connected network for all the ( ) with a dropout rate of 0.4.",
"Additionally, we use trainable 300-dimension word embeddings initialized by Google word2vec (Mikolov et al., 2013).",
"The maximum utterance word length is 40 and the maximum dialog length is 10.",
"We set the for the bow-loss to be 0.1.",
"80% of the entire dataset are used for training, 10% for validation and 10% for testing.",
"Parameters mentioned are selected based on the performance of the validation set.",
"The evaluation of unsupervised methods has always been a challenge.",
"We first compare our models with a simple K-means clustering algorithm to show its context sensitivity.",
"Then we compare our models with traditional HMM methods both qualitatively and quantitatively.",
"Finally, we compare the three proposed model variants.",
"The qualitative evaluation involves generating dialog structures with different models, and the quantitative evaluations involves calculating the likelihood on a held-out test set under a specific model, which measures the model's predictive power.",
"We apply the model and obtain a latent state z t for each conversational exchange.",
"Since z t is a discrete one-hot vector, we can group the conversational exchanges with the same latent state together.",
"This process is similar to clustering the conversational exchanges.",
"But we choose the VAE over simple clustering methods because the VAE introduces more flexible context-sensitive information.",
"A straightforward clustering method like K-means usually groups sentences with similar surface forms together, unless the previous context is explicitly encoded along with the utterances.",
"To compare the grouping result of our model with a traditional clustering method, we perform K-means clustering on the dataset and calculate the within-cluster cosine similarity between the bag-of-word vectors of the utterances and the context.",
"This cosine similarity measures how similar the utterances are on the word token level, higher the value is, more words the utterances share in common.",
"It turns out the average cosine similarity between the current utterance is 0.536 using the K-means and 0.357 using the D-VRNN, while the average cosine similarity between the context is 0.320 using the K-means and 0.351 using the D-VRNN.",
"This does show that in the D-VRNN result, the context are more similar to each other, while in the K-means, the current utterances are more similar on the surface textual level,which is not ideal because the dialog system needs context information.",
"Table 1 shows an example where the D-VRNN clustering result is different from the K-means result.",
"The conversational exchanges to be clustered are in the last row.",
"These two exchanges have the same surface form but different contexts.",
"D-VRNN identifies them as different by incorporating context information, whereas K-means places them into the same cluster ignoring the context.",
"HMMHMM is similar to our model.",
"Actually, if we remove the h t layer from Fig. 1, it becomes an HMM.",
"But it is this additional layer of h t that encodes the dialog context into continuous information and is crucial in the success of our model.",
"In Fig. 4 and 5, we compare our models quantitatively with the TM-HMM model with 10 topics and 20 topics from Zhai and Williams (2014), which performs the best on a similar task-oriented dialog dataset, the DSTC1 .",
"The y-axis shows the negative log likelihood of reconstructing the test set under a specific model.",
"The lower the negative log likelihood, the better the model performs.",
"The x-axis shows different numbers of latent states N used in the models.",
"As we can see, all of the VRNN-based models surpass the TM-HMM models by a large margin and are more invariant to the change in N on both datasets.",
"Especially when N is small, the performance of HMM is not stable.",
"Qualitatively, we compare the dialog structures generated by different models.",
"Fig. 2 and 3 show the discovered dialog structures using the D-VRNN model, and Fig. 7 in the Appendix shows the dialog structures learned by HMM.",
"Each circle in these figures represents a latent state z t with expert interpreted meanings, and the directed-arrows between the circles represent the transition probabilities between states.",
"Human experts interpret each z t consistently by going through conversational exchanges assigned to the same z t .",
"For a better visualization effect, we only visualize the transitions with a probability equal or greater than 0.2 in the figures.",
"We observe reasonable dialog structures in Fig. 2 and 3.",
"The D-VRNN captures the major path looking for restaurant (anything else) get restaurant address and phone thank you in the restaurant search task.",
"It also captures what's the weather place and time api call in the weather report task.",
"However, we do not get a dialog structure with entities separated (such as food type I like area I prefer price range I want ... ).",
"Because users know the system's capability and tend to give as many entities as possible in a single utterance, so these entities are all mingled in one dialog exchange.",
"But the model is able to distinguish the presenting match result state from the presenting no match result state (on the top of Fig. 2), which is important in making correct predictions.",
"But in Fig.",
"7(a) generated by HMM, even if we set 10 states in the HMM, some states are still collapsed by the model because they share a similar surface form.",
"And in Fig.",
"7(b) also by HMM, the dialog flow skips the what can I do state, and goes from the start directly to pro-viding place and time, which is not reasonable.",
"Another interesting phenomenon is that there are two thank you concentrated states in Fig. 2.",
"This is because, users frequently say thank you on two occasions, 1) after the dialog system presents the restaurant information, most users will say thank you; 2) then the system will ask is there anything else I can help you with, after which users typically respond with thank you, that's all.",
"This interesting structure is a reflection of the original rules in the dialog system.",
"Moreover, in Fig. 3, we see 1) transitions from both directions between states place and time and api call, and 2) transitions from api call to itself, as there are simulated speech recognition errors in the data and the system needs to confirm the entity values and update the API query.",
"Even though the three proposed VRNN-based models have similar structures, they perform differently and are able to compensate each other.",
"For example, in Fig.",
"8(b) in the Appendix, DD-VRNN is able to recognize a new state not done yet, what's the weather, when users start a new query.",
"This can complement D-VRNN's result.",
"Quantitatively, the three model variants also perform differently.",
"On the restaurant test set shown in Fig. 4, DD-VRNN has the best overall performance compared with other models.",
"Especially when the number of states N is small (e.g. 5 or 7 states), the advantage of the direct transition is more obvious.",
"We think the reason behind is that it's easier and more accurate to model the direct transitions between a smaller set of states; as we increase N , the direct transitions between states become less and less obvious and therefore, help less on the predictive power.",
"To our surprise, putting more weights on the named entities has a negative impact on the performance on the restaurant dataset.",
"The underlying reason might be that the emphasis on the named entities shifts the focus of the model from abstract latent representation to a more concrete word token level.",
"However, on the simulated weather dataset shown in Fig. 5, NE-D-VRNN performs relatively well.",
"It might be because the weather dataset is completely simulated, which makes it easier to recognize the named entities.",
"With a more accurate named entity recognition, NE-D-VRNN is able to help the performance.",
"Overall, D-VRNN is the most stable one across datasets, so we will use D-VRNN in the following experiments.",
"The ultimate goal of such a structure discovery model is to utilize the structure to facilitate downstream tasks, such as dialog system training.",
"Therefore, we propose to incorporate the dialog structure information into the reward function of the RL training.",
"We believe that the transition table learned from the original dataset will guide the policy to make better decisions and converge faster by encouraging the policy to follow the real-data distribution.",
"Similar to other RL models, we build a user simulator to perform the RL training.",
"Please refer to the Appendix for the training details.",
"We use policy gradient method (Williams, 1992) to train the dialog policy.",
"Traditional reward functions give a positive reward (e.g. 20) after the successful completion of the task, 0 or a negative reward to penalize a failed task and -1 at each extra turn to encourage the system to finish the task sooner rather than later (Williams et al., 2017).",
"But this type of delayed reward functions doesn't have immediate rewards at each turn, which makes the model converge slowly.",
"Therefore, we propose to incorporate the learned conversational exchange transition information as an immediate reward.",
"The intuition is that in order to complete a task sooner, most users will follow a certain pattern when interacting with the system, for example, a typical flow is that users first give the entities information such as location, then ask for the restaurant information and finally, end the conversation.",
"If we can provide the RL model with the information on what action is more likely to follow another action, the model can learn to follow real-data distributions and make better predictions.",
"We encode the transition information through KL-divergence, a measurement of the distance between two distributions, in the reward function.",
"We design four types of reward functions and describe each of them in Algorithm 1 and Eq.",
"10 in the Appendix.",
"The traditional delayed reward is the baseline.",
"The second reward function, Rep-reward, uses constant penalty for repeated questions, as penalizing repetition yields better results, according to Shi and Yu (2018).",
"The third reward function, KL-reward, incorporates the transition table information.",
"From the RL model, we get the predicted probability p pred for different actions; from the D-VRNN model, we get the transition probability p trans between states and each state is translated to an action.",
"We calculate the negation of the KL-divergence between p trans and p pred and use it as the immediate reward for every turn.",
"This immediate reward links the predicted distribution with the real-data distribution by calculating the distance between them.",
"The fourth reward function (KL+Rep) gives an additional -2 repetition penalty to the KL-reward to test the combination effect of the two types of penalties.",
"We evaluate the RL performance by the average success rate, shown in Fig. 6.",
"We observe that all the experimental reward functions greatly improve the performance of the baseline.",
"We also observe that the baseline has a higher variance, which is due to the inefficient delayed rewards.",
"Moreover, both the KL-reward and the KL-Rep reward reach Figure 6: Average success rate of RL models with different reward functions.",
"a higher success rate at around 10,000 iterations than the Rep-reward and converges faster.",
"These two reward functions also achieve a better convergent success rate after 10,000 iterations.",
"This suggests that adding KL-divergence of p trans and p pred into the reward helps develop a better policy faster.",
"The good performance comes from the transition probability p trans learned by the discrete-VRNN.",
"p trans summarizes the communication pattern of most users in the real world and the KL-divergence measures the distance between the predicted distribution p pred and p trans from the real dataset.",
"The RL model processes this KL-reward signal and learns to minimize the gap between p trans and p pred .",
"As a result, p pred will follow closer to the real data distribution, leading the model to converge faster to a better task success rate.",
"In this way, we successfully incorporate the dialog structure information from the real data into the RL system training.",
"We observe that the use of KL reward improves the performance significantly in terms of both convergence rate and the final task success rate.",
"Further, the combination of KL and repetition reward makes the model more stable and achieves a better task success rate, compared with the model with only KL-reward or Rep-reward.",
"This indicates that the KL-reward can be combined with other type of rewards to achieve a better performance.",
"A key challenge for discourse analysis and dialog system building is to extract the latent dialog structure.",
"We adopted the VRNN with discrete latent variables to learn the latent states of each conversational exchange and the transitions between these states in an unsupervised fashion.",
"We applied the algorithm on a restaurant search task and a simulated weather report task, and evaluated the model quantitatively and qualitatively.",
"We also proposed a way to incorporate the learned dialog structure information into a downstream dialog system building task.",
"We involved the dialog structure in the RL reward design, which made the model converge faster to a better task success rate.",
"The performance of the Discrete-VRNN model has a major impact on the performance of the policy training.",
"We plan to further improve the dialog structure learning process.",
"Currently, we try to capture the status of the named entities by increasing the weights on the entities, which focus on the concrete word token level.",
"In the future, we may use more sophisticated ways to encode the entity information into the latent states."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"result",
"method",
"method"
] |
[
"Abstract Turn-level user satisfaction is one of the most important performance metrics for conversational agents.",
"It can be used to monitor the agent's performance and provide insights about defective user experiences.",
"While end-to-end deep learning has shown promising results, having access to a large number of reliable annotated samples required by these methods remains challenging.",
"In a large-scale conversational system, there is a growing number of newly developed skills, making the traditional data collection, annotation, and modeling process impractical due to the required annotation costs and the turnaround times.",
"In this paper, we suggest a self-supervised contrastive learning approach that leverages the pool of unlabeled data to learn user-agent interactions.",
"We show that the pre-trained models using the self-supervised objective are transferable to the user satisfaction prediction.",
"In addition, we propose a novel few-shot transfer learning approach that ensures better transferability for very small sample sizes.",
"The suggested few-shot method does not require any inner loop optimization process and is scalable to very large datasets and complex models.",
"Based on our experiments using real data from a large-scale commercial system, the suggested approach is able to significantly reduce the required number of annotations, while improving the generalization on unseen skills.",
"Nowadays automated conversational agents such as Alexa, Siri, Google Assistant, Cortana, etc. are widespread and play an important role in many different aspects of our lives.",
"Their applications vary from storytelling and education for children to assisting the elderly and disabled with their daily activities.",
"Any successful conversational agent should be able to communicate in different languages and accents, understand the conversation Work done as an intern at Amazon Alexa AI.",
"context, analyze the query paraphrases, and route the requests to various skills available for handling the user's request (Ram et al., 2018).",
"In such a large-scale system with many components, it is crucial to understand if the human user is satisfied with the automated agent's response and actions.",
"In other words, it is desirable to know if the agent is communicating properly and providing the service that is expected by the user.",
"In the literature, it is referred to as targeted turn-level satisfaction as we are only interested in the user's satisfaction for a certain conversation turn given the context of the conversation, and not the overall satisfaction for the whole conversation (Park et al., 2020).",
"Perhaps the most basic use of a user satisfaction model would be to monitor the performance of an agent and to detect defects as a first step to fix issues and improve the system.",
"Anticipating user dissatisfaction for a certain turn in a conversation, an agent would be able to ask the user for repeating the request or providing more information, improving the final experience.",
"Also, a powerful user satisfaction model can be used as a ranking or scoring measure to select the most satisfying response among a set of candidates and hence guiding the conversation.",
"The problem of user satisfaction modeling has recently attracted significant research attention (Jiang et al., 2015; Bodigutla et al., 2019; Park et al., 2020; Pragst et al., 2017; Rach et al., 2017).",
"These methods either rely on annotated datasets providing ground-truth labels to train and evaluate (Bodigutla et al., 2019) or rely on ad hoc or human-engineered metrics that do not necessarily model the true user satisfaction (Jiang et al., 2015).",
"Access to reliable annotations to be used in building satisfaction models has been very challenging partly due to the fact that a large-scale conversation system supports many different devices as well as voice, language, and application components, providing access to a wide variety of skills.",
"The traditional approach of collecting samples from the live system traffic and tasking human annotators to label samples would not be scalable due to the cost of annotations as well as the turn-around time required to collect and annotate data for a new skill or feature.",
"Note that onboarding new skills in a timely manner is a crucial to ensure active skill developer engagement.",
"To address this problem, we propose a novel training objective and transfer learning scheme that significantly improves not only the data efficiency but also the model generalization to unseen skills.",
"In summary, we make the following contributions: We propose a contrastive self-supervised training objective that can leverage virtually any unlabeled conversation data to learn user-agent interactions.",
"We show that the proposed method can be used to pre-train state-of-the-art deep language models and the acquired knowledge is transferable to the user satisfaction prediction.",
"We suggest a novel and scalable few-shot transfer learning approach that is able to improve the label efficiency even further in the case of few-shot transfer learning.",
"We conduct extensive experiments using data from a large-scale commercial conversational system, demonstrating significant improvements to label efficiency and generalization.",
"The traditional approach to evaluating a conversational system is to evaluate different functionalities or skills individually.",
"For instance, for a knowledge question answering or web search skill, one can use response quality metrics commonly used to evaluate search system and ranking systems such as nDCG (Jrvelin et al., 2008; Hassan, 2012; Fox et al., 2005).",
"While these methods provide justifi-able measures for certain skills, they are not extendable to a large number of skills, especially for skills without a set of proper hand-engineered features and metrics, or newly developed third-party skills (Bodigutla et al., 2019).",
"Another, more general, line of research is to evaluate the performance of a conversation system from the language point of view.",
"Here, the objective is to measure how natural, syntactically and semantically, an automated agent is able to interact with a human user.",
"For instance, using generic metrics such as BLEU (Papineni et al., 2002) or ROUGE (Lin, 2004) one can measure how the agent's responses are consistent with a set of provided ground-truth answers.",
"However, these approaches not only suffer from shortcomings such as inconsistency with the human understanding (Liu et al., 2016; Novikova et al., 2017) but also are not practical for a real-world conversation system due to their dependence on ground-truth responses.",
"A more recent approach is to use human annotations specifically tailored for the user satisfaction task as a source of supervision to train end-to-end prediction models (Bodigutla et al., 2019).",
"Jiang et al. (2015) suggested training individual models for 6 general skills and devised engineered features to link user actions to the user satisfaction for each studied skill.",
"Park et al. (2020) proposed a hybrid method to learn from human annotation and user feedback data that is scalable and able to model user satisfaction across a large number of skills.",
"Gutmann and Hyvrinen (2010) was the first study to propose the idea of noise-contrastive learning in the context of a capturing a distribution using an objective function to distinguish samples of the target distribution from samples of an artificially generated noise distribution.",
"Contrastive predictive coding (CPC) (Oord et al., 2018) suggested the idea of using an NCE objective to train an autoregressive sequence representation model.",
"Deep InfoMax (Hjelm et al., 2018) used self-supervised contrastive learning in an architecture where a discriminator is trained to distinguish between representations of the same image (positive samples) or representations of different images (negative samples).",
"While many different variations of contrastive methods have been suggested, the main idea remains the same: defining a self-supervised objective to distinguish between the hidden representations of samples from the original distribution and samples from a noise distribution (Trinh et al., 2019; Devon et al., 2020; Yao et al., 2020).",
"Few-shot transfer learning is a very active and broad subject of research.",
"We limit the scope of our study to methods in which a form of gradient supervision is provided by a target task to ensure the efficient transferability of representations trained on a source task.",
"Lopez-Paz and Ranzato (2017) suggested the idea of joint multi-task training and using the cosine similarity of the concatenated network gradients from the source and target tasks.",
"For gradients with negative cosine distance, they project the source gradients to a more aligned direction by solving a quadratic programming problem.",
"Luo et al. (2020) continued that line and suggested a method in the context of few-shot transfer learning, showing that using even a few samples from the target task can significantly improve the transferability of the trained models.",
"Li et al. (2020) presented a similar idea but suggested adjusting learning rates for each layer to improve the cosine similarity of different tasks.",
"While these methods show promising results, they only measure the similarity between concatenated gradient vectors consisting of all network parameters which is a very rough measure of alignment.",
"Also, they require solving for a quadratic or iterative optimization problem as an inner loop in the training procedure that can be computationally expensive and often prohibitive for large-scale problems.",
"In this paper, we consider the conversational interaction between a human user and an automated agent.",
"Each interaction consists of a set of turns in which the user provides an utterance and the agent provides appropriate responses.",
"A set of turns that are happening within a certain time window are grouped as a conversation session.",
"Formally, we can represent a session as a set of turns: S i = { ( U t =0 i , R t =0 i ) , . . . , ( U t = T i , R t = T i ) } (1) Here, S i represents session i consisting of a set of turns as tuples of utterance and responses, ( U ti , R ti ) , for the first turn t = 0 to the last turn t = T in that session.",
"In the context of turn-level user satisfaction modeling, we are interested in the classification of a certain targeted turn within a session as either satisfying (SAT) or dissatisfying (DSAT).",
"Note that the satisfaction here is defined based on the agent's response given a certain utterance and the context (i.e., other session turns).",
"We use the notation Y t i { SAT , DSAT } to indicate the user satisfaction for the targeted turn t = t of session i .",
"See Figure 2 for examples of SAT/DSAT interactions.",
"In this study, we use real-world data from Alexa, a large-scale commercial conversational agent.",
"Specifically, we use a dataset of about 891,000 real-world conversation sessions in which a certain turn within each session is annotated by a human annotator as SAT or DSAT.",
"Human annotators had access to the session context and followed a standard labeling protocol (further information is provided in Appendix A).",
"As a preprocessing step, we limited turns within each session to a window of five turns: at most two turns before the targeted turn, the targeted turn, and at most two turns after the targeted turn.",
"This labeled dataset is denoted as D sup .",
"In addition to D sup , we also use a large pool of real-world session data without any annotation or label.",
"This dataset is about twice the size of D sup , but as we are not limited to targeted turns, we keep all session turns and decide context windows based on a randomized data augmentation step.",
"The resulting effective sample size is significantly larger than D sup .",
"We denote this unlabeled dataset as D unsup .",
"As both datasets were sampled from real traffic, we ensured that there is no overlap between D unsup and the evaluation splits of D sup .",
"The conversations cover a wide variety of internally developed (1p) and third-party (3p) developer skills.",
"Due to the imbalanced traffic, in our 1 Due to confidentiality concerns, we are not able to disclose the exact annotation protocols and data specifications.",
"datasets, there is a huge variation between the number of samples for different skills.",
"For instance, 1p skills such as music or weather have hundreds of thousands of samples while many 3p skills only have less than 10 samples throughout our datasets.",
"To properly evaluate the performance of our predictors on such imbalanced data, we proposed a novel approach to split the data and to evaluate.",
"We build two test sets: a test set measuring in-domain performance and another test set to measure the out-of-domain generalization.",
"The in-domain test set consists of samples from skills that the train set covers.",
"The out-of-domain test set measures the performance on skills that are not covered by the train set.",
"Ideally, we would like to observe good classification performance in both test splits, indicating the ability of our models to learn and model the current major traffic and to generalize to less frequent or future traffic.",
"Based on this, we split D sup to 70% train, 15% validation, and the rest for the test (about 1 / 5 of test samples are out-of-domain and 4 / 5 are in-domain).",
"The in-domain and out-of-domain test sets consist of 17 and 275 skills, respectively.",
"The D unsup is randomly split to 80% train and the rest for validation, regardless of skills.",
"Table 1 presents a summary of dataset statistics for D sup .",
"Figure 1 shows a high-level drawing of the network architecture used in our experiments.",
"It consists of a language model (LM) that encodes utterance and response pairs to vector representations.",
"Here, we consider up to T turns before and after the targeted turn.",
"To further summarize the list of the previous or next turns, we use GRU layers (Chung et al., 2014).",
"Then, an average pool is used to produce a representation vector, z , for each session.",
"Note that before the pooling, simple non-linear MLPs are used to transform each partial representation.",
"Finally, z is used as an input to a set of different head networks, responsible for making predictions for different objectives.",
"Regarding the LM, we use the standard BERT encoder (Devlin et al., 2018) architecture pre-trained as suggested by Liu et al. (2019).",
"To make a fixed-length representation of the utterance response pairs i.e. turn semantics, we use an average pool at the last encoder layer of the BERT token representations.",
"We also tried other approaches such as using the classification token instead of pooling, but based on our initial results simple pooling performed consistently better.",
"We share our BERT-based LM parameters across the network to encode the session turns.",
"However, we train separate GRU networks to summarize the previous and next turns.",
"The output dimension of the LM is equal to 768 , the size of the standard BERT hidden layer.",
"The hidden layer and output size of our GRUs are 256 , and we use 2-layer bidirectional GRUs.",
"Each head is a simple MLP with a single hidden layer of size 256 followed by a ReLU nonlinearity.",
"The final network consists of about 117 .",
"7 million parameters from which about 110 million is related to BERT and the rest is for GRUs, heads, etc. 3.4 Supervised Learning Baseline As a baseline approach, we use the network defined in Section 3.3 with a binary classification head to distinguish SAT and DSAT samples.",
"Here, we use labels provided by D sup and a binary cross-entropy (BCE) loss function.",
"An Adam optimizer (Kingma and Ba, 2014) with a batch size of 512 is used to train the network for 10 epochs.",
"The base learning rate for all non-BERT layers is set to 10 3 , while for BERT layers, we use a smaller learning rate of 5 10 5 .",
"The learning rates are decayed with a factor 5 twice at 60% and 80% of total iterations.",
"Unless indicated otherwise, we use a similar training setup for other experiments suggested in this paper.",
"We define a self-supervised objective in which the model is tasked to distinguish real sessions from unreal (or noisy) sessions.",
"Any unlabeled dataset, such as D unsup can be used to sample real sessions.",
"To generate unreal textual information, different approaches have been suggested in the literature such as back-translation (Fang and Xie, 2020), generative modeling (Liu et al., 2020), or even random word substitutions.",
"In this work, we leverage the multi-turn and structured nature of sessions to generate noise samples by simply shuffling the targeted utter-ances/responses within each training batch (see Figure 3 for an example).",
"Intuitively, the noise samples are sessions in which the targeted utterance or response does not belong to the rest of the session.",
"Therefore, the model has to capture the joint distribution of the context and targeted turns.",
"Algorithm 1 shows an overview of the sample generation and training process for the proposed contrastive objective.",
"Sample 1 (+) U : Play R : What do you want me to play?",
"Sample 2 (+) U : What time is it?",
"R : The time is 12:55 pm Sample 3 (-) U : Play R : The time is 12:55 pm Sample 4 (-) U : What time is it?",
"R : What do you want me to play?",
"The objective introduced in Section 4.1 is not directly applicable to be used as a user satisfaction model. One approach to leverage the pool of unsupervised data is to pre-train the model on unlabeled data using the self-supervised objective, and then attach a classifier head and finetune the network to distinguish SAT and DSAT samples. In our implementation, we pre-train using the self-supervised objective on D unsup for 10 epochs, then train a classifier head on D sup for another 10 epochs; adjusting the learning rates for the network body to 0 . 1 of the base learning rates (see Section 3.4 for more information on the learning rate setup).",
"In the pretraining approach, we solely relied on the loose semantic relationship between the self-supervised and the user satisfaction modeling tasks. However, it is desirable to have a representation that is not only solving the self-supervised task but is also useful for the final objective. In other words, we have a source task ( S ) which we have a large number of training samples and a target task ( T ) with a limited number of samples that is our main interest. The idea is to use information from the target task during the source training such that the trained model is most compatible with the target.",
"Let us assume we have datasets DS and DT corresponding to the source ( S ) and target ( T ) tasks as well as inference functions for each task: f S ( . | , S ) and f T ( . | , T ) . In this notation, represents",
"represents shared network parameters (i.e., the body in our architecture) and represents task-specific parameters (i.e., a head in our architecture). Formally, when optimizing for task S , we are interested in:",
"arg min SE x ,y DS [ LS ( f S ( x | , S ) , y )] , (2)",
", SE where LS is the loss function for the source task. A simple gradient descent step to solve this problem can be written as: t +1 t E ( LS ( f S ( x | t , tS ) , y )) , t +1 S tS SE ( LS ( f S ( x | t , tS ) , y )) . (3) However, we are interested in optimization steps that do not increase the loss value for task T : E x ,y DT [ LT ( f T ( x | t +1 , t +1 T ) , y )] E x ,y DT [ LT ( f T ( x | t , tT ) , y )] . (4)",
"Considering (4) as an optimization constraint can potentially halt the optimization because improvements to the source objective do not directly translate to improvements to the target task. In other words, the constraint above may not be always directly satisfiable using gradient steps in the source domain.",
"To overcome this issue, instead of using gradient descent, we define the problem as a Randomize Block Coordinate Descent (RBCD) (Nesterov, 2012; Wright, 2015) optimization. At each RBCD iteration, only a subset of model parameters, i.e. a block noted as b , is sampled from a distribution B and used for the gradient descent update 2 :",
"Note that we only use the RBCD optimization for the network body parameters ( ), while the head parameters ( S and T ) are optimized using a regular gradient descent optimization.",
"In this work, we propose the idea of adjusting the block selection distribution, B , such that parameters having more aligned source and target gradients have more chance of being selected:",
"where the inputs to LS and LT are omitted for brevity. Intuitively, (6) is used to discourage parameter updates that are not aligned with the T task",
"which can be viewed as a soft method to enforce the constraint in (4). Here, there are multiple options to define the granularity of the block selection such as layer-wise, neuron-wise, or element-wise. Based on our initial experiments, we found that defining the block elements to be layer-wise results in the best performance.",
"Algorithm 2 shows an outline of the proposed method. At each iteration within the training loop, we back-propagate the S and T losses and store the gradients of layer parameters. For parameters related to the S head, we follow a simple gradient descent update. For body parameters, we only update the parameters if the inner product of the S and T tasks is positive or at a small random outcome with the probability of . To guarantee the convergence of the source task, we allow all parameters to be selected at each step at least with a very small probability of . In our experiments, we consider as a hyperparameter taking values in { 0 . 001 , 0 . 005 , 0 . 01 , 0 . 05 , 0 . 1 } . Additional care is required when updating the T head layer parameters as the DT is usually much smaller than DS and the T head is prone to overfitting. We use a validation set from task T to detect over-fitting for the T head and early stop the updates. Note that a hyperparameter is used to set the frequency of the T head updates after the early stopping. Having less frequent head updates allows the T head to gradually improve and adapt to the changes in the body without getting overfitted. In our experiments, we search for proper values in { 0 . 001 , 0 . 002 , 0 . 005 , 0 . 01 } .",
"In contrast to other works in the literature which mostly leverage the alignment of concatenated gradients (Lopez-Paz and Ranzato, 2017; Luo et al., 2020), we propose layer-wise similarity measurements providing more granularity and more adaptability. Also, the suggested approach does not require any inner loop optimization process or gradient projection and hence is scalable to large-scale problems. The only computational and memory overhead is to store the model gradients with respect to each task and to compute inner products between the layer parameters.",
"The method explained in this section is general to few-shot transfer learning and joint training settings where a large source dataset is being used to achieve representations that are most useful for a final target task. For our use-case, we use the suggested approach considering the source task, S ,",
"as the self-supervised contrastive objective and the target task, T , as the user satisfaction prediction task. In our experiments, after the joint training process, we reinitialize the T head and finetune the network for the T task. We found this approach to be helpful to achieve the best results as the jointly trained T head is often slightly overfitted.",
"We used PyTorch (Paszke et al., 2017) to train our models. For each case, we continue the training for the maximum number of epochs (10 in our experiments) and select the best model based on the validation performance. We conducted our experiments on a cluster of 48 NVIDIA V100 GPUs (16 GB memory, 8 GPUs per machine). It took between about 6 hours to 27 hours to run individual experiments, depending on the case.",
"For each experiment, we report Area Under the ROC Curve (AUC-ROC) and Area Under the Precision-Recall Curve (AUC-PR) as the performance measures. The results for the in-domain and out-of-domain held-out test sets are reported",
"separately. Note that there is an imbalance in the frequency of SAT and DSAT labels, and also there is a difference in the label distribution for the in-domain and out-of-domain test sets. To ensure the statistical significance of the results, each experiment is repeated four times using random initializations reporting the mean and standard deviations.",
"Figure 4 shows a comparison of the in-domain test results for the supervised training and the self-supervised contrastive pretraining methods. For each case, we report the in-domain test performance using models trained with a different number of annotated training samples. The x-axis is plotted in the log scale. It can be seen that the contrastive self-supervised approach is much more data-efficient compared to the supervised approach as it leverages the pool of unlabeled data.",
"Figure 5 shows a comparison between the supervised training and the self-supervised pretraining methods on the out-of-domain test set. Similar to the in-domain case, there is a significant gap between the labeled data efficiency of these approaches. However, compared to the in-domain case, using even all training samples, the gap does not appear to close. In other words, for the out-of-domain test set the self-supervised approach is not only more data-efficient but also tends to generalize better. In a real-world conversation system, the out-of-domain generalization can be crucial as many different new skills are being developed and included in the system every day, making the traditional in-domain human annotation less practical due to the required annotation turnaround time.",
"In Figure 6 and Figure 7, we compare the in-domain and out-of-domain performance of the self-supervised pretraining method with the proposed few-shot learning method. As it can be seen from Figure 6, the in-domain AUC-PR and AUC-ROC for the few-shot learning are consistently better than the self-supervised pretraining approach. Note that the performance gap closes at about 5000 samples; perhaps because it is enough training data for fine-tuning and successfully transferring the pre-trained model. The out-of-domain performances as reported in Figure 7 show better results for the few-shot approach but the margin of improvement is relatively smaller than the in-domain case.",
"age human annotation data for turn-level satisfaction prediction, excluding approaches using human-engineered and skill-specific metrics as well as methods that only consider the quality of conversation from the language perspective.",
"Table 3 in Appendix B presents a qualitative comparison of the baseline supervised training and the self-supervised approach suggested in this paper. Here, to highlight the generalization and data-efficiency of each method, we limit the number of annotated samples to 1024 random samples from the training set of the D sup dataset. For this table, we provide sample sessions that are chosen with an emphasis on more difficult requests, unclear requests, or requests involving 3p skills. U and R indicate the targeted utterance and response, while U + x and R + x indicate the context utterance and responses appearing x turns after the targeted turn.",
"From the provided examples, it can be inferred that the self-supervised approach provides a deeper understanding of the user-agent interaction and is",
"able to generalize better even for infrequent 3p skills. It is consistent with the quantitative results presented in the paper.",
"This paper suggested a self-supervised objective to learn user-agent interactions leveraging large amounts of unlabeled data available. In addition to the standard fine-tuning approach, this paper presented a novel few-shot transfer learning method based on adjusting the RBCD block selection distribution to favor layer parameters with source and target gradients pointing in similar directions. According to the experiments using real-world data, the proposed approach not only requires significantly less number of annotations, but also generalizes better for unseen out-of-domain skills."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"other",
"other",
"abstain",
"result",
"abstain",
"abstain",
"other"
] |
[
"Source code processing heavily relies on the methods widely used in natural language processing (NLP), but involves specifics that need to be taken into account to achieve higher quality.",
"An example of this specificity is that the semantics of a variable is defined not only by its name but also by the contexts in which the variable occurs.",
"In this work, we develop dynamic embeddings, a recurrent mechanism that adjusts the learned semantics of the variable when it obtains more information about the variable's role in the program.",
"We show that using the proposed dynamic embeddings significantly improves the performance of the recurrent neural network, in code completion and bug fixing tasks.",
"Deep learning is now actively being deployed in source code processing (SCP) for solving such tasks as code completion (Li et al., 2018), generating code comments (Alon et al., 2019), and fixing errors in code (Vasic et al., 2019a).",
"Source code visually looks like a text, motivating the wide use of NLP architectures in SCP.",
"A lot of mod-ern SCP approaches are based on recurrent neural networks (Le et al., 2020), other popular architectures are transformers, and convolutional and graph neural networks.",
"Utilizing the specifics of source code as a data domain may potentially improve the quality of neural networks for SCP.",
"These specifics include three main aspects.",
"Firstly, the source code is strictly structured, i.",
"e.",
"the source code follows the syntactic rules of the programming language.",
"Secondly, the vocabularies may be large or even potentially unlimited, i.",
"e.",
"a programmer is allowed to define the identifiers of the arbitrary complexity.",
"Thirdly, the identifiers are invariant to renaming, i.",
"e.",
"renaming all the user-defined identifiers The work was done while working at Samsung-HSE Laboratory, HSE University.",
"does not change the algorithm that the code snippet implements.",
"The first two mentioned specifics have been extensively investigated in the literature.",
"For example, the tree-based architectures, such as TreeLSTM (Chen et al., 2018) or TreeTrans-former (Shiv and Quirk, 2019) allow for the utilization of the code's structure.",
"On the other hand, using byte-pair encoding (Karampatsis et al., 2020; Sennrich et al., 2016) or the anonymization of out-of-vocabulary identifiers (Chirkova and Troshin, 2021) deals with the unlimited vocabulary problem.",
"However, the property of source code being invariant to renaming user-defined identifiers has not been paid much attention to.",
"In this work, we aim to close this gap for the recurrent neural networks (RNNs).",
"Let us take a closer look at the invariance property.",
"In Fig. 1",
"(a) and",
"(b), a code snippet implementing a simple mathematical calculation is presented with two different variable naming schemes.",
"Both code snippets implement the same algorithm, i.",
"e.",
"are equivalent in the program se-mantics space, but have different text representations.",
"Classic NLP approach implies using the embedding layer as the first layer in the network where learnable embeddings encode the global semantics of the input tokens.",
"In example",
"(a), the embeddings of variables x and y make sense, as these variables are usually used in mathematical calculations, but in example",
"(b), the embeddings of variables foo and foo2 do not reflect any semantics.",
"Moreover, even in case",
"(a), the semantics of identifier y reflected by its embedding are too broad, i.",
"e.",
"this identifier could be used in a lot of different calculations, while variable y has a much more specific role in the program, i.",
"e.",
"storing the result of the particular function.",
"The key idea of this work is that the embedding of a variable in the program should reflect the variable's particular role in this program and not only its name .",
"The name of the variable may act as the secondary source of information about the variable's role, but the main source of this information is the program itself, i.",
"e.",
"the contexts the variable is used in.",
"We develop the recurrent mechanism called dynamic embeddings that captures the representations of the variables in the program based on the contexts in which these variables are used.",
"Being initialized before a program processing, the dynamic embedding of a variable is updated each time the variable has been used in the program, see the scheme in Fig.",
"1(g).",
"We test the dynamic embedding approach in two settings: the standard setting with the full data, and the anonymized setting, when all variable names are replaced with unique placeholders var1 , var2 , var3 etc.",
"In the full data setting, we initialize the dynamic embeddings with standard embeddings, see Fig.",
"1(e), to implement the idea of the variable name being a secondary source of information about the variable's semantics.",
"In the anonymized setting, we initialize the dynamic embeddings using a constant initial embedding, the same for all identifiers, see Fig.",
"1(f).",
"In this setting, the variable names are not used at all, and the model detects the role of the variable purely based on the contexts in which the variable is used in the program.",
"Although being less practically oriented, the anonymized setting is a conceptually interesting benchmark, as it highlights the capabilities of deep learning architectures to understand the pure program structure , that is actually the main goal of SCP, without relying on the unstructured textual information contained in variable names.",
"In addition, the anonymized setting could be the case in practice, e.",
"g.",
"when processing the decompiled or obfuscated code (Lacomis et al., 2019).",
"In the experiments, we show that using the proposed dynamic embeddings significantly outperforms the model that uses the standard embeddings, called static embeddings in our work, in both described settings in two SCP tasks, namely code completion and bug fixing.",
"To sum up, our contributions are as follows: We propose the dynamic embeddings for capturing the semantics of the variable names in source code; To demonstrate the wide practical applicability of the proposed dynamic embeddings, we show that they outperform static embeddings in two different code processing tasks, namely code completion (generative task) and bug fixing (discriminative task), in the full data setting; We propose the version of the dynamic embeddings approach that does not use variable names at all, and show that it achieves high results in both tasks, sometimes even outperforming the standard model trained on full data (with variable names present in the data).",
"The possibility of improving deep learning models of source code by taking into account the invariance property of variable names has been superficially discussed in the literature.",
"Ahmed et al. (2018) replace variables with their types, while Gupta et al. (2017) and Xu et al. (2019) use the static embeddings for anonymized variables.",
"However, the existing SCP work did not consider developing a special architecture that dynamically updates the embeddings of the variables during processing a code snippet.",
"Our research is also related to the field of processing out-of-vocabulary (OOV) variable names.",
"The commonly used approaches for dealing with OOV variables are using the pointer mechanism (Li et al., 2018) or replacing OOV variables with their types (Hu et al., 2018).",
"As we show in our work, both methods may be successfully combined with the proposed dynamic embeddings.",
"In the context of NLP, Kobayashi et al. (2017) use a similar model with dynamic embeddings to process OOV and anonymized named entities in natural text.",
"In contrast to their approach, we apply dynamic embeddings to the whole vocabulary of variable names, and incorporate dynamic embeddings into the model that relies on the syntactic structure of code.",
"This results in more meaningful dynamic embeddings.",
"We firstly describe what format of the model input we use, i.",
"e.",
"the procedure of code preprocessing, and then describe our model.",
"At the end of this section, we discuss how we use the proposed model in two code processing tasks.",
"To capture the syntactic structure of an input code snippet, we convert it to an abstract syntax tree (AST), see Fig.",
"1(c) for the illustration.",
"In order to process the code snippet with an RNN, we need to convert the AST into a sequence.",
"We use the most popular approach that implies traversing the AST in the depth-first order (Li et al., 2018), see Fig.",
"1(d).",
"Recent research shows that using the AST traversal may be even more effective than using specific tree-based architectures (Chirkova and Troshin, 2020).",
"Each node in the AST contains a type , reflecting the syntactic unit, e.",
"g.",
"If or NameLoad .",
"Some nodes also contain values , e.",
"g.",
"a user-defined variable or a constant.",
"We insert the <EMPTY> value in the nodes that do not have values so that the input snippet is represented as a sequence of (type, value) pairs: I = [( t 1 , v 1 ) , . . . , ( t L , v L )] .",
"Here L denotes the length of the sequence, t i T denotes the type and v i V denotes the value.",
"The size of the type vocabulary T is small and determined by the programming language, while the size of the value vocabulary V may be potentially large, as it contains a lot of user-defined identifiers.",
"Given sequence I , the RNN outputs a sequence of hidden states [ h 1 , . . . , h L ] , h i R d hid , i = 1 , . . . , L .",
"These hidden states are used to output the task-specific prediction as described in Section 3.3.",
"We use the standard baseline recurrent architecture that initializes the hidden state with a learnable predefined vector h init R h hid : h 0 = h init , and then updates the hidden state at each timestep i = 1 , . . . , L :",
"Here, e v i R d val and e t i R d type denote the embeddings of the value and the type correspondingly.",
"Without loss of generality, we use the Long Short-Term Memory recurrent unit (LSTM) (Hochreiter and Schmidhuber, 1997).",
"In this work, we replace value embeddings e v i with dynamic embeddings described below.",
"Dynamic embeddings.",
"The general idea of dynamic embeddings is that the variable's embedding is updated in the RNN-like manner after each occurrence of the variable.",
"We first describe the updating procedure and then discuss the initialization strategy.",
"Since the dynamic embeddings change over timesteps, we use notation e v,i for the dynamic embeddings of value v at timestep i .",
"For example, for the value located at the i -th position in the input sequence, v i , the dynamic embedding after processing the i -th step is denoted as e v i ,i , and its previous state is denoted as e v i ,i 1 .",
"At each timestep i = 1 , . . . , L , we update the dynamic embedding e v i ,i of the current value v i and hidden state h i using two LSTMs: e v i ,i = LSTM dyn ( h i 1 , e t i ; e v i ,i 1 ) (1) e v,i = e v,i 1 , v (cid:54) = v i (2) h i = LSTM main ( e v i ,i 1 , e t i ; h i 1 ) (3) An illustration of this update procedure is given in Fig. 1(h), and the example scheme of processing a code snippet is given in Fig.",
"1(g).",
"LSTM main implements the recurrence over the hidden state, while LSTM dyn implements the recurrence over dynamic embeddings, and the same LSTM dyn is used to update the dynamic embeddings of different values at different timesteps.",
"We note that at timestep i , the dynamic embedding of only current value v i is updated, while the dynamic embeddings of other values do not change, as stated in Eq.",
"(2).",
"In practice, several dummy values, e.",
"g.",
"<EMPTY> , <UNK> and <EOF> , do not change their roles in different sequences.",
"We use static embeddings for these values.",
"Initializing dynamic embeddings.",
"The most reasonable strategy for initializing the dynamic embeddings is to use static embeddings: e v, 0 = e v where e v are the learnable embedding vectors of all values v in vocabulary V .",
"In this case, the model utilizes all the available information about the variable: the variable's name introduced by the programmer that is supposed to somehow reflect the mission of the variable, and the contexts in which the variable occurs (captured by hidden states).",
"In other words, the model firstly understands the loose role of the variable from its name and then finetunes this understanding, while learning more about what the variable is used for.",
"Another possible strategy is to ignore all the variable names and initialize all dynamic embeddings with a constant embedding: e v, 0 = e init , e init R d val , v V .",
"Although the initial embeddings of all values are the same, they will be updated differently, as different values occur in different locations in the program, and the dynamic embeddings will characterize these locations.",
"Interestingly, the described strategy ensures that if we rename all the variables in the program, the output of the RNN does not change.",
"Such a behaviour is consistent with the variable invariance property: renaming all user-defined variables does not change the underlying algorithm.",
"The common sense is that the architecture for processing source code should be consistent with the variable invariance property, and dynamic embeddings with the constant initial embedding fulfill this conceptual requirement.",
"On the other hand, commonly-used static embeddings are not consistent with the invariance property, i.",
"e.",
"renaming variables scheme results in using different embeddings and changes the predictions of the RNN.",
"As will be shown below, in practice, using both sources of information, namely variable names and variable occurrences in the program, performs better than relying on only one source of information.",
"In other words, dynamic embeddings with static embedding initialization outperform both static embeddings in the full data setting (relying only on variable names) and dynamic embeddings with constant initialization (relying only on variable occur-rences).",
"We test the proposed dynamic embeddings in two SCP tasks: code completion and variable misuse prediction.",
"Below, we describe how we make predictions in these tasks, using the output [ h 1 , . . . , h L ] of the RNN.",
"Code completion.",
"In code completion, the task is to predict the next type-value pair ( t i +1 , v i +1 ) given prefix [( t 1 , v 1 ) , . . . , ( t 1 , v i )] at each timestep i = 1 , . . . , L .",
"In our work, we focus on value prediction, as type prediction is a simple task usually solved with high quality in practice (Li et al., 2018).",
"We rely on the setup and the architecture of Li et al. (2018).",
"To predict the next value v i +1 based on [ h 1 , . . . , h i ] , we firstly apply the standard attention mechanism (Bahdanau et al., 2015), obtaining the context vector c i = (cid:80) ij =1 j h j , j denote attention weights, and then combine all available representations using a fully-connected layer: h i = W 1 h i + W 2 c i + W 3 h parent , where h parent is the hidden state of the parent node.",
"For computing the logit y v,i R of each value v , we reuse the dynamic embeddings e v,i of the input layer, as well as the static embeddings of several dummy values: y v,i = e Tv,i h i , and apply Softmax on top of y v,i to predict the probability distribution P valsi R | V | over next value v .",
"Finally, we use the pointer mechanism to improve the prediction of rare values.",
"We reuse attention scores [ 1 , . . . , i ] , (cid:80) ij =1 j = 1 , j (cid:62) 0 as a distribution over previous positions P posi R i , and use switcher s = ( w swit, 1 h i + w swit, 2 c i ) (0 , 1) to gather two distributions into one: R i = [ sP valsi , (1 s ) P posi ] .",
"To make the prediction, we select the largest element of vector R i ; if it corresponds to the value from the vocabulary, we output this value, if it corresponds to the position, we copy the value from that position.",
"To train the model, we optimize the cross-entropy loss, using as ground truth the values in the vocabulary for in-vocabulary values and the last occurrence of the value (if any) for out-of-vocabulary values.",
"Variable misuse prediction.",
"The variable misuse task implies outputting two pointers: the first one points to the location i in which the wrong value v i is used and the second one points to the location j that can be used to repair the bug by copying its value v j .",
"If there is no bug, the first pointer selects a special no-bug location.",
"In this task, we rely on the approach of Vasic et al. (2019b) and its implementation of Hellendoorn et al. (2020).",
"In addition, we change the format of the model input and use the depth-first AST traversal (Li et al., 2018).",
"We use the bidirectional LSTM, with each of the two LSTMs being equipped with its own dynamic embeddings.",
"As a result, we have two sequences of hidden states: [ h fw1 , . . . , h fw L ] and [ h bw1 , . . . , h bw L ] .",
"To make the prediction, we firstly combine two representations using a fully-connected layer: h i = tanh ( W 1 h fw i + W 2 h bw i ) , i = 1 , . . . , L and then use two other fully-connected layers to obtain logits y bug i R and y fix i R of each position i : y bug i = ( w bug ) T h i , y fix i = ( w fix ) T h i .",
"Finally, we apply Softmax over [ y bug1 , . . . , y bug L , y nobug ] and over [ y fix i ] Li =1 to obtain two distributions over positions.",
"Here, learnable y nobug R corresponds to a no-bug position.",
"The model is trained using the cross-entropy loss.",
"Data and preprocessing.",
"We conduct experiments on Python150k (Raychev et al., 2016a) and JavaScript150k (Raychev et al., 2016b) datasets.",
"Both datasets are commonly used in SCP and were obtained by downloading repositories from GitHub.",
"However, the train / test split released by the authors of the dataset does not follow the best practices of splitting data in SCP (Allamanis, 2019; LeClair and McMillan, 2019), so we use another train / test split released by Chirkova and Troshin (2020).",
"This split is based on the repositories, i.",
"e.",
"all files from one repository go either to train or test, and was deduplicated using the tools provided by Allamanis (2019), i.",
"e.",
"code files in the test set that are duplicated in the train set were removed; this is a common case in source code downloaded from GitHub.",
"In addition, the Python dataset includes only redistributable code (Kanade et al., 2020).",
"Splitting by repository and deduplication are highly important in SCP to avoid a percentage of testing accuracy being provided by the examples the model saw during training.",
"With the described new split, the results in our tables are not directly comparable to the results reported in other works.",
"To validate our implementation, we compared the quality of baseline models trained in our implementation with the quality reported in the papers describing these baselines, and observed that the numbers are close to each other (see details in Section 5.4).",
"For the code completion task, we use the entire code files as training objects, filtering out exceptionally long files, i.",
"e.",
"files longer than 3 10 4 characters.",
"The resulting training / testing set consists of 76K / 39K files for Python and of 69K / 41K for JavaScript.",
"The mean length of the code files in 567 / 669 AST nodes for Python / JavaScript.",
"For the variable misuse task, we select all top-level functions, including functions inside classes from all files, and filter out functions longer than 250 AST nodes, and functions with fewer than three positions containing user-defined variables or less than three distinct user-defined variables.",
"The resulting training / testing set consists of 417K / 231K functions for Python and 202K / 108K functions for JavaScript.",
"One function may occur in the dataset up to 6 times: 3 times with a synthetically generated bug and 3 times without bug.",
"The buggy examples are generated synthetically by choosing random bug and positions from positions containing user-defined variables.",
"The described strategy for injecting synthetic bugs is the same as in (Hellendoorn et al., 2020).",
"In both tasks, the size of the node type vocabulary is 330 / 44 for Python / JavaScript, the vocabulary of node values is limited to 50K of the most frequent values.",
"Metrics.",
"Following Li et al. (2018), we use accuracy to measure model quality in the code completion task, counting all predictions of <UNK> as wrong.",
"Following Hellendoorn et al. (2020), to measure the quality in the variable misuse task, we use the joint localization and repair accuracy (what portion of buggy values is correctly located and fixed).",
"Details.",
"In all our models, node type embeddings have 300 units, node value embeddings have 1200 units (for static embeddings), and the one-layer LSTM's hidden state has 1500 units.",
"The described model size matches the configuration of the model of Li et al. (2018).",
"The proposed dynamic embeddings of values have 500 units in all experiments to show that they outperform the static embeddings with much less dimension.",
"In the code completion task, we split the input AST traversals into the chunks, each chunk has the length of 50 AST nodes, and apply attention and pointer only over the last 50 positions.",
"In the variable misuse task, we pass the entire function's AST traversal to the model.",
"In code completion / variable misuse tasks, we train all models for 10 epochs with AdamW (Loshchilov and Hutter, 2019) / Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 / 0.0001, a learning rate decay of 0.6 after each epoch, a batch size of 128 / 32, and using weight decay of 0.01 / 0.",
"We also use early stopping for the variable misuse task.",
"For code completion, all hyperparameters are the same as in (Li et al., 2018).",
"We tuned hyperparameters to achieve convergence on the training set, for variable misuse.",
"We use the same hyperparameters for static and dynamic embedding models.",
"Both used datasets are large, which helps to avoid overfitting, thus the regularization is not much needed.",
"We firstly test the proposed dynamic embeddings in the setting without using the user-defined variable names, stored in node values.",
"Directly omitting values results in losing much information, this can be seen as replacing all the variables in a code snippet with the same variable var .",
"To save the information about whether two AST nodes store the same value or not, we anonymize values, i.",
"e.",
"we map the set of all node values in the program (except dummy values, e. g. <EMPTY> ) to the random subset of anonymized values",
"var1...varK , K is a size of the anonymized value vocabulary, we use K = 1000 .",
"For example, code snippet sum = sum + lst[i] may be transformed into var3 = var3 + var8[var1] , and stat = [sum / n; sum] into var1 = [var5 / var2; var5] .",
"All occurrences of the same value in the program, e.",
"g.",
"sum , are replaced with one anonymized value, but value sum may be replaced with different anonymized values in different programs.",
"Fig. 2 visualizes how the anonymization is applied to AST.",
"Although being not so practically oriented, the anonymized setting highlights the capabilities of the deep learning models to capture pure syntactic information from the AST, without relying on the unstructured text information laid in variable names.",
"In our opinion, this setting should become a must for the future testing of syntax-based SCP models, and the proposed dynamic embeddings could be used as a first layer in such models to capture an equal-not-equal relationship between values.",
"In the described anonymized setting, we compare the proposed dynamic embeddings (constant initialization) with the static embeddings, i.",
"e.",
"learning the static embeddings of",
"var1..varK .",
"Results for the code completion task.",
"In the code completion task, we consider three variants of the architecture: plain LSTM, and attentional LSTM with and without pointer.",
"We note that our goal is to compare the dynamic embeddings with the baseline in three setups, i.",
"e.",
"using three base architectures.",
"We do not pursue the goal of comparing base architectures.",
"Table 1 lists the results.",
"For all base architectures, the proposed dynamic embeddings outperform static embeddings by a large margin.",
"We note that the number of parameters in both architectures is approximately the same.",
"In the first two setups, with plain and attentional LSTMs, the models can only predict values by generating them from the vocabulary (no pointer), relying on the input and output embeddings of the values.",
"In these setups, the difference between static and dynamic embeddings is large, indicating that dynamic embeddings capture the semantics of the variables significantly better.",
"In the setup Model Code compl.",
"Still, the gap between them and dynamic embeddings is large.",
"The portion of correct predictions made using the pointer is 25% for static embeddings and only 0.01% for dynamic embeddings.",
"This shows that dynamic embeddings actually replace the pointer mechanism, performing better.",
"This also explains why the difference in quality of dynamic embeddings between attentional LSTM and pointer LSTM is very small.",
"Interestingly, on the Python dataset, the model with dynamic embeddings trained in the anonymized setting outperforms the conventionally used static embedding model trained in the full data setting, although the first model uses much less information during training.",
"The explanation is that the first model predicts rare values much better than the second model: the accuracy of rare 1 val-1 By rare values, we mean values outside top-1000 frequent values.",
"ues prediction is 27% for the first model and 11% for the second, for the pointer LSTM model.",
"On the contrary, frequent values are easier to predict with static embeddings: the accuracy of predicting frequent values is 53% for the first model and 57% for the second model.",
"The total frequencies of rare and frequent values are approximately the same and equal to 25% (the rest 50% are EMPTY values, they are predicted with similar quality with both models).",
"As a result, when counting accuracy over all values, the first model outperforms the second one.",
"However, on the JavaScript dataset, the first model does not outperform the second one.",
"We analysed the example predictions of both models on both datasets and found that in JavaScript, there are a lot of short code snippets commonly used in different projects.",
"This is expected since JavaScript is mostly used for one purpose, web development, while Python is used for a lot of different purposes.",
"As a result, for JavaScript, the total frequency of top-1000 values is 32% (higher than for Python), while the total frequency of rare values is 22% (less than for Python).",
"The commonly used code snippets are easy to predict in the full data setting but hard to predict in the anonymized setting: the accuracy of predicting frequent values is only 44% for the first model and 54% for the second model.",
"The rare values are still better predicted with dynamic embeddings, but with the gap smaller than for Python: the accuracy of rare values prediction is 23% for the first model and 17% for the second one.",
"The gap is smaller since rare values also occur in the commonly used code snippets which improves the performance of the second model on rare values.",
"When counting accuracy over all values, the second model outperforms the first one.",
"Results for the variable misuse task.",
"Table 2 lists the joint accuracies of the proposed model and the baseline in the anonymized setting.",
"Again the dynamic embeddings outperform static embeddings by a large margin.",
"Moreover, the dynamic embeddings outperform even the commonly used static embedding model trained on the full data, for both datasets.",
"We think the reason is that we use the dynamic embeddings in two layers of bi-directional LSTMs and these bi-directional dynamic embeddings provide a rich representation of the input code snippet.",
"We now test the proposed dynamic embeddings in the full data setting, i.",
"e.",
"we compare a commonly used model with static embeddings and the proposed model with dynamic embeddings (static embedding initialization).",
"The initialization of dynamic embeddings was discussed in Sec. 3.2.",
"Both models process the full data (see illustration in Fig. 2. The results for both tasks are presented in Table 3 and show that the dynamic embeddings outperform the static embedding model in all cases.",
"We note that dynamic embeddings could be easily incorporated into any recurrent SCP architecture.",
"In our experiments we incorporate them into the base models of Li et al. (2018) and Vasic et al. (2019b) and show that the dynamic embeddings significantly improve these base models.",
"We also note that we use the dynamic embeddings of 500 units while static embeddings have 1200 units.",
"The number of parameters in the dynamic embedding layer, 2.6M, is much smaller than that of the main LSTM layer, 13.8M, and two orders smaller than the number of parameters in the embedding layer, 134M (the numbers are given for the code completion task).",
"Figure 3 visualizes the predictions of different models for three example code snippets in Python.",
"We highlighted three scenarios when the dynamic embedding model outperforms the static embedding model in the full data setting: 1) capturing the specific role of the variable, e.",
"g.",
"variable qual indexes sequence in the list comprehension in the left example; 2) associating variables with each other, e.",
"g.",
"in the central example, variable name always goes with 0 , and variable post always goes with 1 ; 3) repeating variables when they occur in the similar context they have already been used, e.",
"g.",
"zeros in the right example.",
"In all these examples, the proposed dynamic model makes correct predictions, while the static embedding model makes mistakes, in the full data setting.",
"In the anonymized setting, all models tend to predict previously used variables, and again the dynamic embedding model captures the described relationships, and the static embedding model tends to simply predict the most frequent previously used variable.",
"In our experiments, we use the setup of Li et al. (2018) in the code completion task and of Hellendoorn et al. (2020) in the variable misuse task, but with our custom data split, see details in Section 4. To maintain the possibility of comparing our results to these works, we trained the static embedding models in the full data setting, with the commonly used train / test splits of Python150k and JavaScript150k datasets.",
"For code completion with vocabulary size 50K and pointer network, using exactly the same setup as in (Li et al., 2018), we achieved accuracy of 69.39% / 80.92%, while the paper reports 71% / 81.0% for Python / JavaScript: the results are close to each other.",
"In the variable misuse task, we achieved joint accuracy of 50.2% while Hellendoorn et al. (2020) report 44.4% (Python, JavaScript was not reported in the paper).",
"Our result is higher, since we use 1500 hidden units while Hellendoorn et al. (2020) uses 256 hidden units.",
"In addition, we use different preprocessing and different synthetically generated bugs.",
"In this work, we presented dynamic embeddings, a new approach for capturing the semantics of the variables in code processing tasks.",
"The proposed approach could be used in any recurrent architecture.",
"We incorporated dynamic embeddings in the RNN-based models in two tasks, namely code completion and variable misuse detection, and showed that using the proposed dynamic embeddings improves quality in both full data setting and the anonymized setting, when all user-defined identifiers are removed from the data.",
"I would like to thank Sergey Troshin, Irina Sapa-rina, and the anonymous reviewers for the valuable feedback.",
"The results for the anonymized setting presented in section 5.1 were supported by Samsung Research, Samsung Electronics.",
"The results for the full data setting presented in section 5.2 were supported by the Russian Science Foundation grant 19-71-30020.",
"The research was supported in part through the computational resources of HPC facilities at NRU HSE."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"objective",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Several natural language processing (NLP) tasks are defined as a classification problem in its most complex form: Multi-label Hierarchical Extreme classification, in which items may be associated with multiple classes from a set of thousands of possible classes organized in a hierarchy and with a highly unbalanced distribution both in terms of class frequency and the number of labels per item.",
"We analyze the state of the art of evaluation metrics based on a set of formal properties and we define an information theoretic based metric inspired by the Information Contrast Model (ICM).",
"Experiments on synthetic data and a case study on real data show the suitability of the ICM for such scenarios.",
"Many natural language processing (NLP) problems involve classification, such as sentiment analysis, entity linking, etc.",
"However, the adequacy of evaluation metrics is still an open problem.",
"Different metrics such as Accuracy, F-measure or Macro Average Accuracy (MAAC) may differ substantially, seriously affecting the system optimization process.",
"For example, assigning all elements to the majority class may be very effective according to Accuracy and score low according to MAAC.",
"In addition, in many scenarios such as tagging in social networks (Coope et al., 2018) or topic identification (Yu et al., 2019), the classifier must assign several labels to each item (multi-label clas-sification).",
"This greatly complicates the evaluation problem since, in addition to the class specificity (frequency), other variables appears such as the distribution of labels per item in the gold standard, the excess or absence of labels in the system output, etc.",
"The evaluation problem becomes even more complicated if we consider hierarchical category structures, which are very common in NLP.",
"For example, toxic messages are divided into different types of toxicity (Fortuna et al., 2019), named entities could be organized in nested categories (Sekine and Nobata, 2004), etc.",
"In these scenarios, the category proximity in the hierarchical structure is an additional variable.",
"Even, the problem can be further complicated.",
"Extreme Classification scenarios address with thousands of highly unbalanced categories (Gupta et al., 2019), where a few categories are very frequent and others completely infrequent (Almagro et al., 2020).",
"In addition, some items have no category at all and some have many.",
"An example scenario that we will use as a case study in this article is the labelling of adverse events in medical documents.",
"In this paper, we analyse the state of the art on metrics for multi-label, hierarchical and extreme classification problems.",
"We characterize existing metrics by means of a set of formal properties.",
"The analysis shows that different metric families satisfy different properties, and that satisfying all of them at the same time is not straightforward.",
"Then, propose an information-theoretic based metric inspired by the Information Contrast Model similarity measure (ICM), which can be particularized to simpler scenarios (e.g. flat, single labeled) while keeping its formal properties.",
"Later, we define a set of five tests on synthetic data to compare empirically ICM against existing metrics.",
"Finally, we explore a case study with real data which shows the suitability of ICM for such extreme scenarios.",
"The paper ends with some conclusions and future work.",
"In this section, we analyze the literature on the two main evaluation problems tackled in this paper: multi-labeling and class hierarchies, keeping the focus on extreme scenarios (numerous and unbalanced classes).",
"There are three main ways of generalizing effectiveness metrics to the multi-label scenario (Zhang and Zhou, 2014).",
"The first one consists in modeling the problem as a ranking task, i.e. the system returns an ordered label list for each item according to their suitability.",
"Some specific ranking metrics applied in multi-label classification displayed in (Wu and Zhou, 2017) are: Ranking Loss , which is a ordinal correlation measure, one-error which is based on Precision at 1, or Average Precision .",
"Although these metrics are very common, they do not take into account the specificity of (unbalanced) classes.",
"Jain et al. proposed the propensity versions of ranking metrics (Precision@k, nDCG) in order to weight classes according to their frequency in the data set (Jain et al., 2016).",
"Reducing the classification to a ranking problem is specially appropriate in extreme classification scenarios and simplifies the definition of metrics.",
"However, it also has several disadvantages.",
"First, it requires the output of the classifier to be in ranking format, and that does not fit many scenarios.",
"For example, annotating posts in social networks requires predicting the amount of tags to be assigned to the post.",
"For this reason, we focus on classification outputs, so ranking based metrics are out of our scope.",
"Apart from ranking metrics, multi-label effectiveness metrics have been categorized into label-and example-based metrics (Tsoumakas et al., 2010; Zhang and Zhou, 2014).",
"Label-based evaluation measures assess and average the predictive performance for each category as a binary classification problem, where the negative category corresponds with the other categories.",
"The most popular are the label-based Accuracy (LB-ACC) and F-measure (LB-F) 1 .",
"The label-based metrics have some drawbacks.",
"First, they do not consider the distribution of labels per item.",
"Hits are rewarded independently of how many labels are associated 1 In the single label scenario, the label-based F-measure converges to the traditional F and the label-based accuracy is proportional to the traditional ACC.",
"to the item.",
"Second, while items are supposed to be random samples, classes are not, so the idea of averaging results across classes is not always consistent.",
"That is, the metric scores can vary substantially depending on how the category space is configured.",
"Finally, if there are a large number of possible categories (extreme classification), the score contribution of any label has an upper limit of 1 |C| , being C the set of categories.",
"This limit can be problematic, specially when labels are unbalanced and numerous.",
"On the other hand, the example-based metrics compute for each object, the proximity between predicted and true label sets ( s ( d ) = { c s 1 ,",
".., c sn } and g ( d ) = { c g 1 ,",
".., c gn } ).",
"Some popular ways to match category sets in multi-label classification evaluation are the Jaccard similarity (EB-JACC) which is computed as | s ( d ) g ( d ) | | s ( d ) g ( d ) | (Godbole and Sarawagi, 2004), or the precision (cid:16) | s ( d ) g ( d ) | | s ( d ) | (cid:17) , recall (cid:16) | s ( d ) g ( d ) | | g ( d ) | (cid:17) and their F combination (EB-F).",
"Another example-based metric is the Hamming Loss (EB-HAMM) (Zhang et al., 2006) which matching function is defined as: | s ( d )XOR g ( d ) | | C g | where C g represents the set of categories annotated in the gold standard.",
"Subset Accuracy (EB-SUBACC) (Ghamrawi and McCallum, 2005) is a more strict measure due to it requires exact matching between both category sets.",
"Notice that all example-based multi-label metrics converge to Accuracy in the single-label scenario.",
"On the other hand, there are some situations in which these metrics are undefined.",
"If both the gold standard and the system output label sets are empty, the maximum score is usually assigned to the item.",
"The main drawback of these approaches is that they do not take into account the specificity of classes (i.e. unbalanced classes in extreme clas-sification).",
"The label propensity applied over precision and recall for single items can solve this lack.",
"Each accurate class in the intersection is weighted according to the class propensity p c (Jain et al., 2016): Prop P ( i ) = (cid:80) c s ( i ) g ( i ) 1 p c | s ( i ) | Prop R ( i ) = (cid:80) c s ( i ) g ( i ) 1 p c | g ( i ) | The propensity factor p c for each class is computed as: p c = 1 1+ Ce A log2( Nc + B ) where N c is 5810 the number of data points annotated with label c in the observed ground truth data set of size N and A , B are application specific parameters and C = ( logN 1)( B + 1) A .",
"In our experiments, we set the recommended parameter values A = 0 .",
"55 and B = 1 .",
"5 .",
"However, propensity precision and recall values are not upper bounded as 1 p c tends to infinite when p c tends to zero.",
"In order to solve this issue, in our experiments we replace the normalization factors | s ( i ) | and | g ( i ) | with the accumulation of inverse propensities in the system output or the gold standard.",
"We also add the empty class c in both the system output and the gold standard in order to capture the specificity of classes in the mono-label scenario: Prop P ( i ) = (cid:80) c s (cid:48) ( i ) g (cid:48) ( i ) 1 p c (cid:80) c s (cid:48) ( i ) 1 p c Prop R ( i ) = (cid:80) c s (cid:48) ( i ) g (cid:48) ( i ) 1 p c (cid:80) c g (cid:48) ( i ) 1 p c where s (cid:48) ( i ) = s ( i ) { c } and g (cid:48) ( i ) = g ( i ) { c } .",
"Propensity F-measure (PROP-F) is computed as the harmonic mean of these values.",
"Kosmopoulos et al., 2013).",
"Kosmopoulos et al. distinguish between pair and set-based metrics.",
"Pair-based metrics weight hits or misses according to the distance between categories in the hierarchy.",
"This distance depends on the number of intermediate nodes (Wang et al., 1999; Sun and Lim, 2001), with the disadvantage that the specificity of the categories is not taken into account.",
"Depth-based distance metrics include the class depth in the metric (Blockeel et al., 2002).",
"However, the depth of the node is not sufficient to model its specificity since depending on their frequency, leaf nodes at the first levels may be more specific than leaf nodes at deeper levels.",
"It is possible to compare the predicted and true single labels by means of standard ontological similarity measures such as Leackock and Chodorow (path-based) (Leacock and Chodorow, 1998), Wu and Palmer (Wu and Palmer, 1994), Resnik (depth-based) (Resnik, 1999), Jiang and Conrath (Jiang and Conrath, 1997) or Lin (Lin, 1998) similarities.",
"The last two are based on the notion of Information Content (IC) or category specificity, i.e., the amount of items belonging to the category or any of its descendants.",
"However, extending pair-based hierarchical metrics to the multi-label scenario is not straightforward.",
"Sun and Lim extended Accuracy, Precision and Recall measures for ontological distance based metrics (Sun and Lim, 2001).",
"This method has two drawbacks.",
"First, it requires defining a neutral hierarchical distance, i.e., an acceptable distance threshold for range normalization purposes.",
"The second drawback is that it inherits the weaknesses of label-based metrics (see previous section).",
"Bloc-keel et al. proposed computing a kernel and thus define a Euclidean distance metric between sums of class values (Blockeel et al., 2002).",
"The drawback is that they assume a previously defined distance metric between categories and the origin and between different categories.",
"Information based ontological similarity measures such as Jiang and Conrath or Lin's similarity do not have an upper bound which is necessary for the calculation of accuracy and coverage.",
"On the other hand, set-based metrics (also called hierarchical-based) consider the ancestor overlap (Kiritchenko et al., 2004; Costa et al., 2007).",
"More concretely, hierarchical precision and recall are computed as the intersection of ancestor divided by the amount of ancestors of the system output category and of the gold standard respectively 2 .",
"Their combination is the Hierarchical F-measure (HF).",
"Since these metrics are based on category set overlap, they can be applied as example based multi-label classification by joining ancestors and computing the F measure.",
"Their drawback is that the specificity of categories is not strictly captured since they assume a correspondence between specificity and hierarchical deepness.",
"However, this correspondence is not necessarily true.",
"Categories in first levels can be infrequent whereas leaf categories can be very common in the data set.",
"In this paper, we propose an information theoretic similarity measure called Information Contrast Model (ICM).",
"ICM is an example-based metric as it is computed per item.",
"Just like HF, ICM is a set-based multi-label metric as it computes the similarity between category sets.",
"Unlike HF, ICM takes into account the statistical specificity of categories.",
"2 In our experiments, when computing the ancestor overlap we consider the common empty label (root class) in order to avoid undefined situations 5811 3 Formal Properties In order to define the set of desirable properties, we formalize both the gold standard g and the system output s as sets of item/category assignments ( i, c ) I C , where I and C represent the set of items and categories respectively.",
"We will denote as P ( c j ) the probability of items to be classified as c j in the gold standard ( P (( i, c j ) g | i I )) .",
"We also assume that the categories in the hierarchical structure are subsumed.",
"For instance, items in a PERSON_NAMED_ENTITY category are implicitly labeled with the parent category NAMED_ENTITY .",
"The common ancestor with maximum depth is denoted as lso ( c 1 , c 2 ) and the descendant categories are denoted as Desc ( c ) including itself.",
"Note that we do not claim that all properties are necessary in any scenario.",
"The purpose of this article is to provide at least one metric that is capable of capturing all aspects simultaneously when necessary.",
"The first property is related to hits.",
"In order to make this aspect independent from the ability of the metrics to capture hierarchical relationships or multi-labeling, we define monotonicity over hits in the simplest case (flat single label scenario): Property 1 [Strict Monotonicity] A hit increases effectiveness.",
"Given a flat single label category structure, if ( i, c ) g \\ s , then 3 Eff ( s { ( i, c ) } ) > Eff ( s ) The next two properties state that the specificity of both the predicted and the true category affects the metric score.",
"That is, an error or a hit in an infrequent category should have more effect than in the majority category.",
"For instance, identifying a rare symptom in a medical report should be rewarded more than identifying a common malady present in the vast majority of patients.",
"In addition, both the specificity of the actual category and the specificity of the category predicted by the system must be taken into account.",
"Again, we make this aspect independent of hierarchical structures and multi-labeling.",
"Property 2 [True Category Specificity] Given a flat single label category distribution, if P ( c 1 ) < P ( c 2 ) and ( i, c 1 ) , ( i, c 2 ) g \\ s , then Eff ( s { ( i, c 1 ) } ) > Eff ( s { ( i, c 2 ) } ) .",
"3 Notice that x X \\ Y x X x / YP",
"The following property captures the effect of the hierarchical category structure.",
"A common element of any hierarchical proximity measure is that it is monotonic with respect to the common ancestor.",
"That is, our brother is always closer to us than our cousin, regardless of which family proximity criterion is applied.In this property we do not consider multi-labelling.",
"Property 4 [Hierarchical Proximity] Under equiprobable categories ( P ( c 1 ) = P ( c 2 ) = P ( c 3 )) , the deepness of the common ancestor affects similarity.",
"Given a single label hierarchical category structure, if s ( i ) = , g ( i ) = c 1 and lso ( c 1 , c 2 ) Desc ( lso ( c 1 , c 3 )) then Eff ( s { ( i, c 2 ) } ) > Eff ( s { ( i, c 3 ) } ) .",
"The last two properties are related with the multi-labeling problem.",
"Property 5 rewards the amount of predicted categories per item.",
"Property 5 [Multi-label Monotonicity] The amount of predicted categories increases effectiveness.",
"Given a flat multi-label category structure, if ( i, c ) g \\ s , then Eff ( s { ( i, c ) } ) > Eff ( s ) Property 6 rewards hits on multiple items regarding a single item with multiple categories.",
"To understand the motivation for this property, we can consider an extreme case.",
"Identifying 1000 symptoms in one patient report is of less health benefit than identifying one symptom in 1000 patients.",
"Property 6 [Label vs. Item Quantity] n hits on different items are more beneficial than n labels assigned to one item.",
"Given a flat multi-label category distribution, if j = 1",
"..n (( j, c j ) g \\ s ) and j = 1",
"..n, i > n (( i, c j ) g \\ s ) then Eff ( s { (1 , c 1 ) ,",
".., ( n, c n ) } ) > Eff ( s { ( i, c 1 ) ,",
".., ( i, c n ) } ) .",
"In this section, we analyze existing metrics on the basis of the proposed formal properties (Table 1).",
"Most of metrics satisfy Strict Monotonicity in single label scenarios.",
"The label-based metric LB-F captures the true and wrong category specificity via the recall component.",
"The example-based metric PROP-F (modified as described in Section",
"2) captures these properties via the propensity factor.",
"Notice that the original propensity F-measure does not capture the wrong category specificity (Prop-erty",
"3) given that the p c factor is applied only to 5812 Table 1: Metric and Formal Properties Family Metrics Constraints Strict True Wrong Hierarchical Multi-label Label vs. Monotonicity Category Category Proximity Monotonicity Item Specificity Specificity Quantity LabelBased Accuracy (LB-ACC) (cid:51) --(cid:51) F measure (LB-F) (cid:51) (cid:51) (cid:51) -(cid:51) ExampleBased Jaccard (EB-JACC) (cid:51) --(cid:51) (cid:51) Hamming (EB-HAMM) (cid:51) --(cid:51) Subset Acc.",
"hits.",
"In addition, both kind of metrics do not capture hierarchical structures.",
"The contribution of example regarding label-based metrics is that, as label-based metrics are computed item by item, the property Label vs. Item Quantity is satisfied (Prop-erty 6).",
"The exception is EB-HAMM which does not normalize the results with respect to the amount of labels assigned to the item.",
"Unlike previous metrics, the set based F-measure (HF) captures the hierarchical structure (Property 4).",
"However, it does not capture the category specificity (properties 2 and 3).",
"Some information-based ontological similarity measures, (Lin and Jiang & Conrath) capture both the category specificity and the hierarchical structure.",
"However, they are not defined for multi-label classification (properties 5 and 6).",
"In sum, different metric families satisfy different properties, and that satisfying all of them at the same time is not straightforward.",
"The properties of ICM are described in the next section.",
"The Information Contrast Model (ICM) is a similarity measure that unifies measures based on both object feature sets and Information Theory (Amig et al., 2020).",
"Given two feature sets A and B , ICM is computed as: ICM( A, B ) = 1 IC ( A )+ 2 IC ( B ) IC ( A B ) Where IC ( A ) represents the information content ( log ( P ( A )) of the feature set A .",
"In our scenario, objects are items to be classified and features are categories.",
"The intuition is that the more the category sets are unlikely to occur simultaneously (large IC ( A B ) ), the less they are similar.",
"Given a fixed joint IC, the more the category sets are specific ( IC ( A ) and IC ( B ) ), the more they are similar.",
"ICM is grounded on similarity axioms supported by the literature in both information access and cognitive sciences.",
"In addition, it generalizes the Pointwise Mutual Information and the Tver-sky's linear contrast model (Amig et al., 2020).",
"The IC of a single category corresponds with the probability of items to appear in the category or any of its descendant.",
"It can be estimated as follows: IC ( c ) = log 2 ( P ( c )) (cid:39) log 2 (cid:32)(cid:12) (cid:12) (cid:83) c (cid:48) { c } Desc( c ) I c (cid:48) (cid:12) (cid:12) (cid:12)(cid:12) (cid:83) c (cid:48) C I c (cid:48) (cid:12)(cid:12) (cid:33) where I c (cid:48) represent the set of items assigned to the category c (cid:48) and Desc( c ) represents the set of descendant categories.",
"In order to estimate the IC of category set, we state the following considerations.",
"The first one is that, given two categories A and B the common ancestor represents their intersection in terms of feature sets: { c i } { c j } = lso ( c i , c j ) (1) The second consideration is that we assume Information Additivity , i.e. the IC of the union of two 5813 sets is the sum of their IC's minus the IC of its intersection: IC ( { c i }{ c j } ) = IC ( c i )+ IC ( c j ) I ( { c i }{ c j } ) (2) Equations 1 and 2 are enough to compute ICM in the single label scenario.",
"Generalizing for category sets: IC ( { c 1 , c 2 , .., c n } ) = IC (cid:32)(cid:91) i { c i } (cid:33) = IC ( c 1 ) + IC ( { c 2 , .., c n } ) IC ( { c 1 } { c 2 , .., c n } ) where, according to the transitivity property; { c 1 } { c 2 ,",
".., c n } = (cid:91) i =2",
"..n ( { c 1 } { c i } ) and according to Equation 1, it is equivalent to (cid:83) i =2",
"..n { lso ( c 1 , c i ) } .",
"Then, we finally obtain a recursive function to compute the IC of a category set: IC ( { c 1 , c 2 , .., c n } ) = IC ( c 1 ) + IC (cid:32) (cid:91) i =2",
"In the case of ICM, it is possible the need for estimating the IC of classes that do not appear in the gold standard.",
"Therefore, we have not evidence about its frequency or probability.",
"We apply a smoothing approach by considering the minimum probability 1 |I| .",
"On the basis of five general similarity axioms, in (Amig et al., 2020) it is stated that the ICM parameters should satisfy 1 , 2 < < 1 + 2 .",
"We propose the parameter values 1 = 2 = 2 an = 3 .",
"This parameterization leads to the following instantiations for each particular classification scenario.",
"In the hierarchical mono-label scenario, it becomes into (equations 1 and 2): ICM( c 1 , c 2 ) = IC ( c 1 ) IC ( c 2 ) + 3 IC ( lso ( c 1 , c 2 )) (3) which is similar to the Jiang and Conrath ontological similarity measure.",
"which is an information additive example-based metric.",
"That is, the information content of the common categories minus the differences.",
"Finally, in the traditional flat mono-label scenario, it becomes into: ICM( c 1 , c 2 ) (cid:39) (cid:40) IC ( c 1 ) if c 1 = c 2 IC ( c 1 ) IC ( c 2 ) i.o.c. (5) which corresponds with Accuracy weighted according to the information content of categories.",
"According to the flat mono-label instantiation (Equation",
"5) ICM 1 = 2 =2 , =3 satisfies the properties 1 2 and 3.",
"According to the single label hierarchical instantiation (Equation",
"3) Property 4 is satisfied.",
"According to the flat multi-label instantiation (Equation 4), Property 5 is satisfied.",
"Unfortunately, the label vs item quantity property is not strictly satisfied given that the gain per hit is additive in non hierarchical scenarios (Property 6).",
"However, in the experiments we will see that the hit gain on items with many categories is smoothed out if the categories are related to each other by a hierarchical structure.",
"Different evaluation aspects such as error rate, category specificity, hierarchical structures, etc., may have more or less weight depending on the scenario.",
"These aspects correspond to the formal properties defined in the previous section.",
"We perform a set of tests in order to quantify the suitability of metrics with respect to each property or evaluation aspect.",
"First, we generate the following synthetic data set.",
"First, we definea hierarchical structure structure of 700 categories exposed in Figure 1.",
"Note that categories { 1",
"..",
"10 } are parent categories spread throughout the hierarchy, and categories { 11",
"..",
"700 } are leaf categories.",
"Secondly, We distributed 100 items across all categories.",
"We generate assignments for each pair item/category ( i, c ) with a probability of p i p c where p i = max (cid:0) 51 i 2225 , 1 2225 (cid:1) with i = 1",
"..",
"1000 and p c = max ( 512 c , 1 ) 1713 where c = 1",
"..",
"700 .",
"We repeat this 1000 times.",
"The result is a distribution (300 , 150 , 40 , .., 0 . 6 , 0 . 6) items per category and (22 . 5 , 22 , 21 . 6 , 21 . 1 , ..., 0 . 5 , 0 . 5) labels per item.",
"The purpose is to ensure unbalanced assignments across items and classes.",
"We generate 1000 gold standards by reordering the category identifiers c each time in the p c computation in order to alter the distribution of items in the hierarchical structure.",
"We consider in this experiment the metrics labelbased Accuracy and F-measure (LB-ACC and LB-F), the example-based metrics Hamming (EB-HAMM), Jaccard (EB-JACC), Subset Accuracy (EB-SUBACC), F-measure (EB-F) and Propensity F-measure (PROP-F), the Hierarchical F-measure (HF) and ICM.",
"The ontological similarity metrics are discarded given that they are not defined for the multi-label case.",
"Ranking based metrics are discarded as the synthetic data set does not include graded assignments.",
"After this, we perform the following tests by comparing two noisy versions of the gold standard.",
"The test result is the percentage of cases in which the hypothetically worse noised output is outscored by the best noised output (Table 2).",
"Ties count 0.5.",
"In the first experiment referred in Table 2 as Sensitivity to Error Rate , We ran an error insertion procedure 1000 times on the goldstandard, with a probability of 0.09 and 0.1 for the best and worst output respectively.",
"On average we will have 9 and 10 errors respectively.",
"Each error consists of randomly choosing one of the 1000 assignments ( i, c ) of the goldstandard and removing it.",
"For all metrics the best output outperforms the worst output in more that 50% of cases.",
"LB-ACC and EB-HAMM seems to be specially sensitive to the error rate.",
"This is due to the fact that they do not consider other aspects such as the category specificity or the hierarchical proximity.",
"Surprisingly, ICM achieves a relatively high error rate sensitivity although it also consider other aspects.",
"We do not have a clear explanation for this.",
"The second experiment is the True Category Specificity test.",
"The intuition is that a gap in a frequent category should have less effect than a gap in an infrequent category.",
"With an error rate of 0.05, for the best output, we remove a single label assignment randomly selected from all the goldstandard.",
"For the worst output, we first select randomly a category and then we remove an assignment from this category.",
"The result is that the best output tends to concentrate the gaps in frequent categories to a greater extent than the worst output.",
"At the table shows, the metrics that satisfy the corresponding property achieve high scores (LB-F, PROP-F and ICM).",
"The third experiment is the Wrong Category Specificity test.",
"The intuition is that a wrong assignment in a frequent category should have less effect than a wrong assignment in an infrequent category.",
"With an error rate of 0.05, we select an 5815 Table 3: Experimental results over real data.",
"assignment ( i, c ) randomly from items with a single label.",
"For the best output we replace c with the most frequent class different than c .",
"For the worst output, we replace c with a randomly selected category different than c .",
"We obtain the same result than in the previous experiment.",
"The fourth experiment is the Hierarchical Similarity test.",
"The intuition is that the more a wrong assignment is far away from the correct category, the more it has effect in the effectiveness score.",
"Again, with an error rate of 0.05, we select an assignment ( i, c ) randomly from single labeled items with leaf categories.",
"For the best output we replace c with a sister wrong category.",
"For the worst output, we replace c with a randomly selected wrong category.",
"Again, the metrics that satisfy the corresponding property achieve high scores.",
"The last test is Item Specificity .",
"The intuition is that a wrong assignment in an item with many labels should have more effect than an error in an item with one or a few labels.",
"For the best output, for each error insertion iteration, we randomly select an assignment ( i, c ) (with the same error rate 0.05).",
"For the worst output, we randomly select an item i , and we take one of its assignments ( i, c ) .",
"In both cases, the category is replaced with a randomly selected wrong label.",
"In other words, we distribute errors uniformly across item/category assignments in the best output and we distribute errors uniformly across items in the worst output.",
"The effect is that the best output concentrates errors in items with many labels.",
"Again, those metrics that satisfy the corresponding metric achieve high performance.",
"The label-based F-measure tends to reward the worst output.",
"The reason is that items with many labels tend to concentrate diverse labels.",
"Therefore, the label-based F measure penalizes the best output.",
"As discussed in the previous section, although ICM does not satisfy the property, the hit gain on items with many categories is smoothed out if the categories are related to each other by a hierarchical structure.",
"The problem addressed is the automatic encoding of discharge reports (Dermouche et al., 2016; Bampa and Dalianis, 2020) from a Spanish hospital to detect adverse events (AEs) from CIE-10-ES 4 , the Spanish version of the tenth revision of the International Classification of Diseases (ICD-10).",
"AEs detection fits to the scenario tackled in this article due to the following reasons:",
"(i) Extreme : CIE-10-ES contains 4816 codes related to AEs, which probability follows a power-law distribution since most of them rarely appear in health records or even they do not appear;",
"(ii) Hierarchical : CIE-10-ES is a hierarchy with six levels: an empty root ( c such that IC ( c ) = 0 ), and then a level composed by three-character-codes categories which can be divided into successive nested subcategories adding characters until seven-character-codes at most; and",
"(iii) Multi-label classification : Each discharge report could have associated with several AEs codes.",
"We have used a corpus composed of 36264 real anonymized discharge reports (Almagro et al., 2020) annotated with AEs codes by experts.",
"The corpus has been divided into three data sets, training, development and test, following the proportion 50%-30%-20% respectively.",
"The corpus includes only 671 AEs codes of 4816 and 84% of the discharge reports have no AEs, so the data is highly biased and unbalanced.",
"4 https://eciemaps.mscbs.gob.es/ecieMaps/ 5816 We have applied five simple baselines in order to analyze the behaviour of the metrics:",
"(i) ALL NONE does not assign any code to each item;",
"(ii) MOST FREQ.",
"assigns the most frequent AE code in the training data set (T45.1X5A) to each item, which just appears in 68 items of 7253;",
"(iii) MATCH 75% divides each item into sentences and assigns a code if a sentence contains 75% of the words of the code description avoiding stop-words;",
"(iv) SVM DESCR.",
"creates a binary classifier for each AE code in the training set using the presence of words of the AEs codes descriptions in the items as features, excepting stop-words;",
"(v) SVM CODES : similar to the previous one but using as features the annotated non-AEs codes in order to check if AEs codes are related to non-AEs codes.",
"Note that MATCH 75% is able to assign any AE, but the SVM baselines are only able to assign AEs appearing in the training data set.",
"Table 3 shows the metrics results obtained by each baseline.",
"Unfortunately, with only five systems it is difficult to find differences in terms of system ranking.",
"Therefore, we have normalised the values for each metric between the maximum and the minimum obtained across the 5 systems in order to study the relative differences of scores (values in brackets).",
"LB-ACC, LB-F and EB-HAMM reward the absence of most of the labels in the corpus, so they are not suitable in this scenario.",
"The rest of the metrics sort systems in the same way.",
"The particularity of ICM is that, as shows the normalized results, the baseline MATCH 75% is penalized with respect to ALL NONE to a greater extent than in other metrics, since MATCH 75% assigns many codes incorrectly, whereas ALL NONE does not provide any information.",
"Another slight particularity of ICM is that the system SVM CODES is rewarded against the rest of baselines to a greater extent.",
"Notice that SVM CODES achieves 269 hits while SVM DESCR achieves 77 hits.",
"The definition of evaluation metrics is an open problem for extreme hierarchical multi-label classification scenarios due to the role of several variables, for instance, a huge number of labels, unbalanced and biased label and item distributions, proximity between classes into the hierarchy, etc.",
"Our formal analysis shows that metrics from different families (label, example, set-based, ontological similarity measures etc.) satisfy different properties and capture different evaluation aspects.",
"The information-theoretic metric ICM proposed in this paper, combines strengths from different families.",
"Just like example-based multi-label metrics, it computes scores by items.",
"Just like set-based metrics, it compares hierarchical category sets.",
"Just like some ontological similarity measures (Lin or Jiang and Conrath), it considers the specificity of categories in terms of Information Content.",
"Our experiments using synthetic and real data show the suitability of ICM with respect to existing metrics.",
"ICM does not strictly hold the label vs. item quantity property.",
"We propose to adapt ICM in order to guarantee all the formal properties as future work.",
"Research cooperation between UNED and the Spanish Ministry of Economy and Competitiveness, ref.",
"C039/21-OT and in the framework of DOTT-HEALTH project (MCI/AEI/FEDER, UE) under Grant PID2019-106942RB-C32."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"other",
"other"
] |
[
"Previous length-controllable summarization models mostly control lengths at the decoding stage, whereas the encoding or the selection of information from the source document is not sensitive to the designed length.",
"They also tend to generate summaries as long as those in the training data.",
"In this paper, we propose a length-aware attention mechanism (LAAM) to adapt the encoding of the source based on the desired length.",
"Our approach works by training LAAM on a summary length balanced dataset built from the original training data, and then fine-tuning as usual.",
"Results show that this approach is effective in generating high-quality summaries with desired lengths and even those short lengths never seen in the original training set.",
"Abstractive summarization (Nallapati et al., 2016; See et al., 2017; elikyilmaz et al., 2018; Dong et al., 2019; Lewis et al., 2020; Liu et al., 2021; Dou et al., 2021) aims at reproducing the semantics and topics of the original text in a concise and fluent summary by paraphrasing.",
"In order to display the summary on different mobile devices or websites with space limitations, we have to produce summaries in different lengths.",
"Length-controllable summarization is a multi-objective optimization problem, including generating complete summaries within desired lengths and selecting proper information to summarize based on desired lengths.",
"The existing length-controllable summarization based on encoder-decoder models can be divided into two categories: (1) early-stop during decoding and (2) information selection before encoding .",
"Early-stop during decoding methods (Kikuchi et al., 2016; Liu et al., 2018; Makino et al., 2019; Kenny Q. Zhu is the corresponding author, and is partially supported by NSFC Grant No. 91646205, and SJTU-CMBCC Joint Research Scheme. Source Document ... iranians erupted in celebration as young people waved flags from their sunroofs , blasted music from stereos and chatted online with the hashtag # irantalks . the excitement came after a breakthrough nuclear deal with the united states and other world powers ... Length Reference Summary 10 iranians celebrate the deal online and in the streets . 30 after a breakthrough nuclear agreement deal with the united states and other world powers , celebration broke out in iranians . young people waved flages and chatted online . Table 1: The reference summaries of one source document with lengths as 10 and 30. Yu et al., 2021) focus on when to output eos (end of sequence), indicating the end of the summary.",
"An ad-hoc method (Rush et al., 2015) generates the eos by assigning a score of to all candidate words at the position of the desired length during test.",
"Ad-hoc can be applied to any seq2seq model.",
"Others learn the relationship between length and the decoder state at training time.",
"However, these methods simply add length requirements to the decoder and ignore the fact that encoding the content, or the information selection, from the source document must also adapt to different length requirements.",
"Table 1 gives an example.",
"The content of the reference summary with 10 tokens is the celebration of iranians.",
"The reference summary with 30 tokens contains the reason for the celebration.",
"Some generated summaries with short desired lengths are likely to be incomplete, similar to the truncated version of summaries generated by models without length constraints.",
"The summaries of ad-hoc and LenAtten in Table 2 are not complete and lose the information about deal.",
"Generated Summaries (Desired Length = 10) BART (Lewis et al., 2020) + Ad-hoc (Rush et al., 2015) (10 tokens) iranians erupted in celebration as young people waved flags from LenAtten (Yu et al., 2021) (12 tokens) the agreement on the final day of persian new year festivities , LPAS (Saito et al., 2020) (22 tokens) iranians erupted in celebration .",
"the excitement came after a breakthrough nuclear deal with the united states and other world powers .",
"Methods based on information selection are two-6885 stage methods (See et al., 2017; Sarkhel et al., 2020; Saito et al., 2020).",
"One prominent example is LPAS (Saito et al., 2020), which in the first stage, extracts top l most important tokens from the source document as a prototype summary where l is the desired length, and in the second stage encodes the original source document and prototype summary by a dual-encoder.",
"On the one hand, such two-stage approaches suffer from noises introduced in the intermediate results.",
"On the other hand, the second stage of these methods does not have first-hand length information, which weakens the length control.",
"Table 2 shows that LPAS contains redundant information about deal and its length is much longer than the reference summary.",
"In this paper, we propose a length-aware attention mechanism (LAAM) which extends a transformer seq2seq model with the ability to select information in the context according to the length constraint.",
"LAAM re-normalizes the attention between encoder and decoder to boost the tokens with higher attention scores based on the desired length, helping with selecting length-aware information from source document.",
"The number of boosted tokens decreases step by step until eos gets the highest attention score, which is helpful in stopping the decoding process at desired length.",
"LAAM can be thought of as a hybrid approach between the two types of previous approaches.",
"We observe that there is a big difference in the number of summaries within different length ranges in the original training set in any summarization dataset.",
"The shorter reference summaries are especially rare.",
"As shown in Table 1, given a short desired length, the summaries of the previous methods and LAAM still select redundant information.",
"To balance the distribution of summaries in different length ranges, we propose a heuristics to create a length-balanced dataset (LBD) by pre-predefining the length ranges and constructing extractive summaries within different length ranges, which helps model to select different information from source document via desired lengths.",
"In our approach, we can create an LBD from original summarization dataset.",
"We first train LAAM on such LBD to enhance the ability of LAAM on information selection with length constraints.",
"Then we fine-tune the pretrained LAAM on original dataset to learn to paraphrase the selected information as abstractive summaries in different lengths.",
"The task of generating short summaries by the models fine-tuned on datasets without short reference summaries can be seen as a zero-shot problem.",
"Benefiting from the pretraining with LBD, our approach can solve the zero-shot length control problem.",
"Our contributions are as follows: 1. We propose a new length-aware attention mechanism (LAAM) to generate high-quality summaries with desired length.",
"LAAM outperforms the state-of-the-art length-controllable methods on CNN/Daily Mail and XSUM in terms of ROUGE scores, length variance and human evaluation (Table 5).",
"2. We design a heuristics to create a length-balanced dataset (LBD) from original dataset.",
"After pretraining LAAM on LBD, the pretrained LAAM performs better than LAAM and can effectively solve the zero-shot length control problem (Table 10).",
"In this section, we first introduce the length-controllable summarization (LCS) problem, then introduce the length-aware attention mechanism (LAAM), which attends the existing transformer seq2seq models, and finally explain how to create a length-balanced dataset (LBD) for pretraining.",
"In LCS, the model takes the source document x = ( x 0 , x 1 , ..., x m ) and the desired length l as input and the summary y = ( y 0 , y 1 , ..., y n ) as output.",
"x i is the i th token of document and y t is the t th token of summary.",
"x m and y n are eos tokens.",
"The goal is to estimate the conditional probability p ( y | x ) : p ( y | x , l )= n (cid:89) t p ( y t | y 1 , y 2 , ..., y t 1 , x , l ) (1) We take the transformer seq2seq model (Vaswani et al., 2017) as our basis.",
"Suppose that the encoder output is h = { h 0 , h 1 , ..., h m } , h R m d , and the output of the decoder's masked self-attention sub-layer is z = { z 0 , z 1 , ..., z n } , z R n d .",
"The normal cross attention is calculated as: A = softmax ( z h T ) (2) where A R n m is an attention matrix.",
"In the transformer seq2seq model, the cross attention of an output token y t is likely to summarize those tokens with high attention scores in the input (source document).",
"By formulating the cross attention as a function of the desired length l , we can manipulate the input information selection according to l .",
"This is the intuition behind LAAM, which is illustrated in Figure 1. Figure 1: Overview of LAAM on Transformer Seq2seq.",
"LAAM is made up of two parts: attention for input selection ( Attn is ) and attention for eos token ( Attn eos ), each optimized for information selection and length control , the two objectives in LCS.",
"Attn is .",
"At decoding, given the initial desired length l , l + 1 is the number of tokens in the output with eos , the remaining length budget ( l t ) decreases as more tokens are generated.",
"Specifically, at step t , l t = (cid:40) l + 1 t, 0 t l 1 , otherwise (3) Intuitively, at each decoding step, the decoder should plan its output y t given the remaining number l t of tokens it will generate.",
"Our key idea is to increase the attention scores of the top l t tokens with the highest attention scores in A t , which gives a boost to the chance of these tokens to be selected and summarized.",
"The interesting effect of this is that",
"i) the longer l , the more source information will be selected for summarization; and",
"ii) as the decoder generates more tokens,the number of tokens to be mainly attended in input decreases.",
"We use one-hot vector p = { p 0 , p 1 , ..., p m } to label the indices of the top l t tokens with the highest attention scores in A t as 1 and others as 0 , and then the length-aware attention score is computed as: a (cid:48) t,i = w t,i a t,i (4) w t,i = (cid:40) 1 , p i = 0 l t , p i = 1 (5) where w t,i is the weight for boosting the attention between x i and y t .",
"According to Eq.",
"(5), the weight for cross attention decreases as the remaining length decreases, resulting in a decrease in the gap between the enhanced tokens and other tokens.",
"This makes the model evenly attend to tokens related to the enhanced tokens and output general words to end the decoding.",
"The model can learn to select information to be summarized by desired length.",
"Attn eos .",
"At each decoding step t , to enhance the ability of model to generate eos at the desired length, we modify the attention score between y t and eos in source document x m as follows: a (cid:48) t,m = ( l + 1 l t ) a t,m (6) The length-aware attention of eos increases step by step, which demonstrates the probability of stopping decoding will increase as the length of the output close to the desired length.",
"Finally, we re-normalize the modified attention scores A (cid:48) t = (cid:8) a (cid:48) t, 0 , a (cid:48) t, 1 , ..., a (cid:48) t,m (cid:9) to get the context vector c t and compute the probability distribution of predicted tokens via: p ( y t | y i<t , x , l ) = softmax ( W c t 1 + b ) (7) c t = m (cid:88) 0 a t,i h i (8) a t,i = a (cid:48) t,i (cid:80) mi =0 a (cid:48) t,i (9) where W and b are trainable parameters.",
"Since the summary lengths of a training dataset may be highly concentrated in a small range (see Table 4), neural-based abstractive summarization models tend to select source information according to the summary lengths they have seen in training data and generate summaries with similar lengths.",
"In order to make the model learn to select proper information according to different desired lengths, we propose a heuristics to create a length-balanced dataset (LBD) by extracting summaries with various lengths from each document in original dataset 6887 and makeing lengths of these extractive summaries evenly distributed in different ranges.",
"Given an abstractive summarization dataset D , which consists of a training set T and a validation set V , we create the training set T (cid:48) and validation set V (cid:48) of LBD.",
"To create T (cid:48) , we set the discrete bins B = { b 1 , b 2 , ..., b k } to represent the ranges of summary length of T (cid:48) .",
"k is the number of the bins.",
"For example, B = { (0 , 10] , (10 , 20] , ... } and b 0 = (0 , 10] .",
"For each document src and its reference summary ref in T , we produce length-controllable pairs (LCPs) consisting of src and its extractive summaries in various length ranges.",
"Let e be the extractive summary of length b B .",
"We apply a greedy approach, where we add one sentence at a time incrementally to the e , until the length of e is within the proper range of b and has the highest ROUGE-1 (R-1) recall with respect to ref .",
"Generally, the more training data, the greater the impact on the model.",
"To make T (cid:48) effective, the number of samples in T (cid:48) should be close to | T | .",
"S ( b ) is the subset of T (cid:48) , including LCPs with extracted summaries with length in b .",
"We add top (cid:100)| T | /k (cid:101) extractive summaries (length b ) with the highest R-1 recall and their source documents to S ( b ) , which makes the summaries equally distributed in the bins or length ranges.",
"The details are in Algorithm 1. Algorithm 1 Creating Training Set of LBD Input : the training set T Output : the training set T (cid:48) 1: rec () computes the R-1 recall score between two texts.",
"For V (cid:48) , we create an extractive reference summary by selecting one sentence at a time until we get a subset of sentences from src that maximizes the R-1 F1 with respect to ref .",
"Given an original source document and reference summary pair, R-1 recall computes the similarity between extracted sentences and reference without considering the length of extracted sentences.",
"This meets our requirements for creating T (cid:48) , that is, we can extract multiple summaries within different length ranges for one document.",
"To evaluate the model at training, each document in V (cid:48) only needs one extractive summary.",
"R-1 F1 considers the difference between the lengths of compared summaries, which can select an extractive summary most similar to the reference in length and content.",
"In this paper, we first pretrain LAAM on LBD for the ability to select information from source document to be summarized according to length constraint.",
"Then we fine-tune the pretrained LAAM ( PtLAAM ) on original dataset.",
"At this stage, armed with the ability to select information from source document, the model further learns to paraphrase the selected information into abstractive summaries with desired length.",
"setup.",
"We design two experiments, general length control and zero-shot length control , to compare our approach with baselines.",
"1 General length control experiment trains and tests the models on the entire original dataset.",
"Zero-shot length control experiment tests the model on a subset of the test set whose summary lengths fall within a certain range, and trains the model on training data with summary lengths outside this range.",
"In each of the two experiments, we evaluate methods' ability to do length control and information selection .",
"We use two popular summarization datasets.",
"CN-N/Daily Mail (CNNDM) (Hermann et al., 2015) consists of pairs of a single source document and a multi-sentence summary.",
"The dataset includes 286,817 training pairs, 13,368 validation pairs and 11,487 test pairs.",
"XSUM (Narayan et al., 2018) is composed of article and single-sentence summary pairs.",
"The number of samples in training/valida-tion/test sets are 204,045/11,332/11,334.",
"plemented on top of BART 2 , because BART (Lewis et al., 2020) is one of the SOTA models in summarization, and it uses less memory and training time than its peers (Shleifer and Rush, 2020).",
"Exact is not a summarization model but is used here to achieve hard length control on any seq2seq models to produce summaries of exact lengths.",
"We follow Liu et al. (2018) and Saito et al. (2020) to segment datasets by different length ranges and set the discrete bins B of summary length ranges in Sec. 2.3.",
"The B of CNNDM is B c = { (0 , 10] , (10 , 30] , ..., (90 , + ) } and that of XSUM is B x = { (0 , 10] , (10 , 30] , (30 , + ) } .",
"3 B x has only 3 ranges as the summaries in XSUM are shorter.",
"In zero-shot length control experiments, test length ranges for CNNDM and XSUM is (0 , 30] and (0 , 10] , containing 488 and 176 samples respectively.",
"The length distribution of the datasets is in Table 4. During training, we set the lengths of gold summaries as desired lengths and take them as input.",
"During test, there are two different setups.",
"The gold length test (Saito et al., 2020) asks the models to generate summaries with desired lengths equal to the reference summaries.",
"The arbitrary length test asks the models to generate summaries with arbitrary lengths, regardless of the reference summary lengths.",
"The output lengths are set at 10, 30, 50, 70 and 90 for CNNDM and at 10, 30 and 50 for XSUM due to the latter's shorter summaries.",
"In each experiment, to evaluate the ability to control length, we do soft length control tests, which sets minlen and maxlen to 0 and 200 respectively during decoding, covering a very large range.",
"It is up to individual models to generate summaries as close as possible to the target length.",
"To evaluate the ability to select information, we utilize 2 In rest of this paper, LAAM refers to BART using LAAM as cross-attention, for simplicity.",
"3 Because historically, to test length control abilities, the test sets of the datasets are split into some predefined ranges, in this work, we adopt the same ranges in creating the bins.",
"Following Lewis et al. (2020), we train our model based on bart.large with lr = 3 e 05 and warmup = 500 .",
"We set the dropout as 0 .",
"1 and mo-mentum as 0 .",
"99 , and terminate the training when the lr < 1 .",
"0 e 5 .",
"At test time, the batch size is 32.",
"We set beam size as 4 for CNNDM and 6 for XSUM.",
"All experiments are done on an RTX 2080Ti GPU with 11G RAM.",
"ROUGE scores: ROUGE-1 (R-1), ROUGE-2 (R-2) and ROUGE-L (R-L) (Lin, 2004) by F1.",
"Variance (Var): Variance of the summary lengths against the desired length l : var = 0 .",
"Human Evaluation: We randomly select 50 samples from CNNDM and 50 samples from XSUM.",
"We ask three human annotators who are native or proficient English speakers to score the generated summaries under 3 aspects: Grammatically correct (Gram.): How grammatical the sentences of a summary",
"are?; Informativeness (Info.): How much important information about the source document is included in",
"summary?; Overall: How good is the overall quality of the summary on you criterion?",
"The score of each aspect will be judged as: Poor (1.0), Barely Acceptable (3.0) and Good (5.0).",
"Length control.",
"We use soft length control here.",
"As shown in Table 5 and Figure 2, LAAM and PtLAAM achieve higher ROUGE scores and lower variance than all other approaches, which means our approaches can generate good quality summaries with tighter length control.",
"LAAM and 6889 PtLAAM outperform BART, indicating that by controlling lengths effectively, summary quality can be improved, too.",
"LPAS performs better than LenAtten on ROUGE scores but worse on Var, because LPAS focuses more on information selection under the length constraint and overlooks where to stop decoding.",
"BLPAS is better than LPAS as using the pretrained BART as the basic model.",
"BART and BLPAS are considered the previous SOTA methods for length-agnostic summarization and length-controllable summarization respectively.",
"Therefore, we compare our approaches with BART and BLPAS in the remaining experiments.",
"Table 6 also confirms that compared with BART and BLPAS, our best approach PtLAAM gives the best quality summaries by human judges.",
"The summaries generated by PtLAAM achieve better scores in grammatically correct, informativeness and overall.",
"The human evaluation scores of XSUM are lower than those of CNNDM because the summaries in XSUM are much shorter.",
"It is more difficult for a shorter summary to ensure that it is grammatically correct and contains enough information.",
"4 We fine-tune the bart.large on CNNDM and XSUM via released code in https://github.com/pytorch/ fairseq/ .",
"Due to incompleteness of the data preprocessing code and possible variance in computing resources and parameters, the results of BART in Table 5 are slightly lower than published version but similar to the numbers reported by others, such as https://github.com/pytorch/ fairseq/issues/2541.",
"To further test the models' length control ability in different target length ranges, we divide the test data into different sets according to length range in Table 4, and test the models on these sets separately.",
"Figure 3 shows that LAAM and PtLAAM still achieve the lowest Var.",
"For the same length range in Figure 3 and Table 4, the more training data in this range, the lower Var of the generated summaries with respect to the reference summaries within this length range.",
"This denotes that the imbalance length distribution in training data interferes with controlling length.",
"In Figure 3, LAAM and PtLAAM have better and more stable ROUGE scores in all length ranges, illustrating that our approaches are not affected by the summary length distribution in training set and can generate better summaries with desired lengths.",
"The results of arbitrary length test are listed in Figure 4, the lower Var of LAAM and PtLAAM illustrate our approach can control summary length better.",
"As R-2 is the most popular metric in summarization, we report the R-2 related scores of generated summaries.",
"We compute R-2 Precision 6890 ( Pre ) of generated summaries instead of F1, because when the desired length of generated summaries is shorter than reference summary lengths, precision can reflect the accuracy of information selection within that limited budget.",
"In Figure 4, LAAM and PtLAAM get better R-2 (Pre) on both datasets, which means our approaches can select more accurate information.",
"As the desired length increases, the length-controllable models are more likely to select accurate information, causing the gap between our approach and BLPAS to gradually decrease.",
"Bart is not designed to control length, resulting in unchanged R-2 (Pre).",
"Although the arbitrary length test provides a unique perspective in the evaluation of the models, its automatic metric, i.e., R-2 (Pre) is only partial.",
"Therefore, in the rest of the section, we will not do arbitrary length test unless the result is evaluated by human.",
"Information selection.",
"Next, we apply hard length control on all models to strictly enforce the exact desired length which is equal to the gold length.",
"The better performance of our proposed approaches in Table 7 indicates that our approaches can cover more important information while producing exactly the same length of the reference summary.",
"Compared to Table 5, our approaches also demonstrate more consistency.",
"As shown in Table 8, the summaries are generated by the SOTA length-controllable approach BLPAS and our best approach PtLAAM with desired length as 10 tokens and 30 tokens.",
"For BLPAS, the summary with desired length as 10 is just the truncated version of the summary with desired length as 30 .",
"Different from BLPAS, the content of summaries generated by PtLAAM are changed according to different desired lengths, which denotes that PtLAAM is more effective in selecting information to be summarized by length constraint.",
"Ablation Studies.",
"We evaluate the effectiveness of the pretraining LAAM on LBD and length-aware attention mechanism.",
"Pretraining on LBD.",
"Compared with LAAM only training on original datasets, PtLAAM performs better on R-2 and Var in Figure 4 and Figure 3. The better R-2 scores indicates that the PtLAAM can select more important information with pretrained LAAM on our created dataset LBD.",
"As one source document of LBD may have different extracted summaries within different length ranges, the model trained on LBD can learn to select different information from source document according to the length constraints.",
"Besides, in LBD, the number of summaries with lengths in different ranges is balanced.",
"PtLAAM gets lower Var, which denotes it can control length better.",
"The Var scores in different length ranges are stable, which weakens the negative impact caused by the imbalanced length distribution of training data.",
"Length-aware attention mechanism, The length-aware attention consists of Attn is and Attn eos .",
"Table 9 shows the results of LAAM test on gold length test with soft length control.",
"Compared with LAAM, the LAAM without Attn is has a big drop in ROUGE scores and a small drop in Var score, demonstrating that Attn is mainly focuses on select information with length constraint.",
"The LAAM without Attn eos gets the much lower Var scores but not much difference in ROUGE scores than LAAM, which means that Attn eos is useful in limiting the output length.",
"LAAM outperforms its 6891 variant because of the effectiveness of length-aware attention mechanism.",
"Thus, in our experiments, we use PtLAAM model, which trains LAAM with both Attn is and Attn eos on LBD first and then fine-tunes the original datasets, as our best approach.",
"In this experiment, we use the modified dataset for zero-shot length control (Sec. 3.3).",
"Zero-shot task can test a model's ability to generalize to summary lengths that it has never seen in the original training data before.",
"Table 10 shows the performance of PtLAAM on ROUGE scores and Var on different datatsets are the best.",
"For soft length control experiment, the ROUGE scores of different models are similar, because the lengths of summaries generated by BLPAS are longer than reference summary lengths (BLPAS has higher Var scores), which causes the generated summaries to match more tokens in the reference.",
"Because ROUGE (F1) scores usually penalize summaries with longer lengths, PtLAAM, which controls the length better, is still better than other approaches.",
"The lowest Var of our approaches means that our approach can better control summary length.",
"In the hard length control experiment, the ROUGE scores of BLPAS drop a lot since the hard control shortens the length of summaries generated by BLPAS.",
"The best performance of PtLAAM on ROUGE indicate PtLAAM learns to select information based on desired lengths.",
"The ROUGE scores of our approaches are similar to those in soft length control experiment, which indicates our approaches are stable in controlling length.",
"The LAAM performs worse than PtLAAM on ROUGE and Var denotes that the ability of LAAM to control length is impacted by length distribution of the training data.",
"The pretraining on LBD is useful in generating high-quality summaries under desired summary length since the summaries are balanced in different length ranges of LBD.",
"In this section, we analyze the performance of different models in controlling length.",
"Input Document a gym teacher in new hampshire has been accused of posing as a young girl on a social media site and persuading an elementary school student to share inappropriate images of herself ... police charged 34-year-old paul johnson-yarosevich of acton , maine , on monday with prohibited use of computer after they say they discovered he 'd been fooling a pre-teen girl into sending him inappropriate photos of herself by posing as a young girl on social media .",
"We use the example in Table 11 to analyze different length-controllable methods since the summaries of this example generated by different models are obviously different in length control and information selection .",
"length of the summary generated by BART is always much longer for covering more information from source document.",
"After adding Exact at test time, BART can generate summary with length exactly the same as desired length.",
"But, as a early-stop during decoding methods , Exact always produce incomplete summaries.",
"The summary with 30 tokens of Exact repeats its summary with 10 tokens during generation.",
"Because such methods ignore that the summaries with different lengths of one document should represent different information of source document.",
"BLPAS tends to select more information with length constraints, which may generate summaries with length longer than desired length (the red part in Table 11).",
"The lengths of summaries generated by LAAM and PtLAAM in Table 11 are the same as the desired lengths.",
"Compared with PtLAAM, given the desired length as 10 , LAAM loses the important information about the reason why Paul was charged as there are few training pairs with summary lengths as 10 .",
"PtLAAM pretrained on LBD can select information according to various desired lengths as the summary lengths in LBD are evenly distributed in different length ranges.",
"The summaries with desired length as 30 of LAAM and PtLAAM are more similar than their summaries with desired length as 10 .",
"This is because there are many more summaries with length about 30 than those with length about 10 in original dataset.",
"Thus, PtLAAM is more effective in generating summaries of lengths that do not appear in the original datasets.",
"Previously, most length-controllable approaches in abstractive summarization focused on stoping decoding at a particular time.",
"Ad-hoc (Rush et al., 2015) generated the eos token by assigning a score of to the tokens in vocabulary and generated a fixed number of words.",
"LenEmb and LenInit (Kikuchi et al., 2016) input length embed-dings to decoder respectively.",
"Bian et al. (2019) took LenEmb and LenInit as an agent and adjusted the reward incorporating with the desired length.",
"LC (Liu et al., 2018) added the desired length into the first layer of CNN encoder.",
"GOLC (Makino et al., 2019) optimized LenEmb and LC by formalizing loss with an overlength penalty.",
"Fan et al. (2018) predefined some special markers to denote different length ranges and prepended the input with such markers during training and testing.",
"Takase and Okazaki (2019) extended the sinusoidal positional encoding (Vaswani et al., 2017) to take account of stepwise remaining length.",
"LenAtten (Yu et al., 2021) added a length attention unit to exploit proper length information based on the stepwise remaining length.",
"Other length-controllable approaches decided the content to be summarized by length-aware intermediate summaries.",
"LPAS (Saito et al., 2020) extracted a word sequence with the desired length from source document and generated summary by a non-length-controllable model with document and extracted summary as input.",
"MLS (Sarkhel et al., 2020) generated a general summary and then input it to a length-controllable model.",
"Compared with previous methods, our approach can effectively control the length of generated summaries by pretraining the length-controllable information selection model on length-balanced dataset.",
"Meanwhile, it can generate summaries with length approximate to the desired length in zero-shot controlling length problem.",
"Recently, the approaches fine-tune the pretrained transformer seq2seq models (Lewis et al., 2020; Zhang et al., 2020; Dou et al., 2021; Liu and Liu, 2021) on summarization datasets.",
"They achieve outstanding performances on summarization tasks.",
"Our approach is applied to transformer seq2seq model, which is orthogonal to above pretrained transformer models and can be added to them.",
"We present a novel approach to produce summaries in desired length that are fluent and coherent.",
"This approach pretrains a transformer seq2seq model whose cross attention between input and output are re-normalized accordingly to the length requirement.",
"The pretraining is done over synthetic summarization data extracted from the original training set but with summary lengths evenly distributed.",
"Our results show that the framework achieves a good balance between information selection from input documents and length control when producing summaries."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"result"
] |
[
"Prompts for pre-trained language models (PLMs) have shown remarkable performance by bridging the gap between pre-training tasks and various downstream tasks.",
"Among these methods, prompt tuning, which freezes PLMs and only tunes soft prompts, provides an efficient and effective solution for adapting large-scale PLMs to downstream tasks.",
"However, prompt tuning is yet to be fully explored.",
"In our pilot experiments, we find that prompt tuning performs comparably with conventional full-model tuning when downstream data are sufficient, whereas it is much worse under few-shot learning settings, which may hinder the application of prompt tuning.",
"We attribute this low performance to the manner of initializing soft prompts.",
"Therefore, in this work, we propose to pre-train prompts by adding soft prompts into the pre-training stage to obtain a better initialization.",
"We name this P retrained P rompt T uning framework PPT .",
"To ensure the generalization of PPT, we formulate similar classification tasks into a unified task form and pre-train soft prompts for this unified task.",
"Extensive experiments show that tuning pre-trained prompts for downstream tasks can reach or even outperform full-model fine-tuning under both full-data and few-shot settings.",
"Our approach is effective and efficient for using large-scale PLMs in practice.",
"The code is publicly available at https:// github.com/thu-coai/PPT .",
"Fine-tuning pre-trained language models (PLMs) (Devlin et al., 2019; Radford et al., 2019; Raffel et al., 2020) has made great progress in recent years.",
"By tuning the entire model parameters, the versatile knowledge acquired from large-scale unlabeled corpora can be adapted to handling Corresponding author.",
"various NLP tasks and outperform the approach of learning models from scratch (Han et al., 2021a).",
"For simplicity, we name this full-model tuning as FT.",
"As shown in Figure 1",
"(b) and",
"(c), there are two mainstream FT approaches.",
"The first one is task-oriented fine-tuning, where a task-specific head is added on top of PLMs, and the entire model is then fine-tuned by optimizing task-specific objectives on corresponding training data.",
"The second one is prompt-oriented fine-tuning (Schick and Schtze, 2021a), which is inspired by the recent works utilizing language prompts to probe the knowledge in PLMs (Petroni et al., 2019; Brown et al., 2020).",
"In prompt-oriented fine-tuning, data samples are converted to sequences containing prompt tokens, and downstream tasks are formalized as language modeling problems.",
"As shown in Figure 1",
"(c), by adding the prompt It was (cid:104) X (cid:105) . to a sentence, we can determine its sentiment polarity with PLMs by predicting great or terrible at the mask position.",
"As shown in Figure 1, compared to task-oriented fine-tuning, prompt-oriented fine-tuning is more similar to the pre-training objectives (masked language modeling), thereby helping to better use knowledge in PLMs and often obtaining better performance.",
"Although FT has shown promising results, with the rapid growth of model scale, fine-tuning and storing the entire large model for each downstream task becomes much more expensive.",
"To address this challenge, Lester et al. (2021) proposes prompt tuning (PT) to adapt large PLMs to downstream tasks cheaply, as shown in Figure 1",
"(d).",
"Specifi-cally, PT uses soft prompts composed of continuous embeddings instead of hard prompts (discrete language phrases).",
"These continuous prompts are generally randomly initialized and learned end-to-end.",
"To avoid storing the entire model for each downstream task, PT freezes all PLM parameters and merely tunes soft prompts, without adding any 8410 Model Layers Hard Prompt Tokens Soft Prompt Tokens",
"intermediate layers and task-specific components.",
"PT has two promising advantages.",
"First, soft prompts can be learned end-to-end in comparison to hard prompts.",
"Second, PT is an efficient and effective paradigm for the practical use of large-scale PLMs, which is comparable to FT when downstream data are sufficient (Figure",
"2(a)).",
"However, as shown in Figure",
"2(b), we find that PT performs much worse than FT under few-shot settings, which may hinder the application of PT in various low-resource scenarios.",
"Hence, in this paper, we explore how to use PLMs for few-shot learning in an efficient and effective manner through PT.",
"Specifically, we conduct pilot experiments to empirically analyze the effectiveness of PT on PLMs in Section 2, which is ignored by most existing works.",
"Our discoveries are as follows: (1) the verbalizer choice has a large impact on the performance; (2) simply initializing soft prompts with concrete word embeddings fails to improve the performance, yet (3) combining soft and hard prompts is helpful; and (4) all these methods cannot handle few-shot prompt tuning problems well.",
"The above observations reveal that prompt searching for PLMs is not trivial, and carefully initialized soft prompt tokens is crucial.",
"To help the model find suitable prompts, we pretrain these tokens with self-supervised tasks on large-scale unlabeled corpora.",
"To ensure the generalization of pre-trained prompts, we group typical classification tasks into three formats: sentence-pair classification, multiple-choice classification, and single-text classification, each format corresponding to one self-supervised pre-training task.",
"In addition, we find multiple-choice classification more general among these formats and we can unify all classification tasks to this format.",
"We name this P re-trained P rompt T uning framework PPT .",
"We evaluate PPT on several datasets based on three 11B PLMs: T5-XXL (Raffel et al., 2020), mT5-XXL (Xue et al., 2021) and CPM-2 (Zhang et al., 2022) in few-shot scenarios.",
"Experiments show that PPT can not only improve PT by a large margin, reaching or even outperforming FT methods, but also reduce the variance of few-shot learning.",
"Besides the effectiveness, PPT also retains the parameter efficiency of PT, which is valuable for future applications on large-scale PLMs.",
"In this section, we present pilot experiments of PT for few-shot learning.",
"We analyze three strategies including hybrid prompt tuning, verbalizer selec-8411 Hard Prompt Verbalizer Accuracy None good/bad 70 .",
"tion, and real word initialization.",
"We follow Lester et al. (2021) to test PT with T5-XXL (11B parameters) and use 100 tunable soft prompt tokens 1 .",
"Following Schick and Schtze (2021b), we randomly select 32 samples to construct the training set D train from the original training data.",
"To tune the hyper-parameters, we compose a validation set D dev from the original training data and ensure | D train | = | D dev | to simulate the few-shot learning setting (Perez et al., 2021).",
"We follow Zhang et al. (2021) and Gao et al. (2021) to use the original validation set as the test set D test , which means | D test | (cid:29) | D train | = | D dev | .",
"Hybrid Prompt Tuning In hybrid prompt tuning, both soft and hard prompts are used (Liu et al., 2021; Han et al., 2021b).",
"However, previous works train soft prompts jointly with the entire model.",
"In PT where only prompt tokens are tunable, the effectiveness of hybrid prompts is under-explored.",
"In Table 1, we show the results of combining soft prompts P with three manually designed hard prompts and two auto-generated hard prompts (Gao et al., 2021) on a sentiment classification task (Socher et al., 2013).",
"We can see that hard prompts improve PT, but still under-perform FT.",
"Furthermore, different hard prompts affect the performance remarkably, therefore much human labor for prompt design and selection is needed.",
"in Figure 1",
"(c) and",
"(d), the verbalizer maps the label Positive to great.",
"From Table 1 we can see that the choices of verbalizers influence the performance remarkably.",
"In general, common words that explain the meaning of corresponding labels work well.",
"This also guides our verbalizer selection for PPT in Section 3. Real Word Initialization In real word initialization, we use the embeddings of concrete words to initialize the soft prompt and test four initialization strategies.",
"The effectiveness of this approach has been verified on small PLMs (fewer than 3B parameters) in previous works (Lester et al., 2021).",
"However, from the experiments on SST-2 (Socher et al., 2013) and BoolQ (Clark et al., 2019) (Table 2), we find that for the 11B model, real word initialization has little or even negative impact on the performance in few-shot scenarios.",
"This suggests that observations on small models can not be directly adapted to large models and finding a good initialization for soft prompts is yet to be explored.",
"To summarize, although the above enhancement strategies cannot help PT achieve comparable results with FT under few-shot settings, they are still the key factors that influence the PT performance.",
"In the following sections, we describe our PPT framework and show in experiments that PPT not only provides a good prompt initialization, but also takes advantage of the good verbalizer, and is complementary to hybrid prompts.",
"In this section, we describe the whole framework of PPT, including how to pre-train prompts and",
"use these pre-trained prompts for specific tasks.",
"Following the approach of T5 (Raffel et al., 2020) and PT (Lester et al., 2021), we solve all downstream tasks in a text-to-text format.",
"As shown in Figure 1",
"(c), to reduce the objective gap between pre-training and downstream tasks, prompt-oriented fine-tuning converts downstream tasks into cloze-style objectives.",
"Taking classification for example, given an input sentence x V and its label y Y , a pattern mapping f : V (cid:55) V is first applied to convert x into a new sequence f ( x ) , where V is the vocabulary of PLMs.",
"f ( x ) not only adds some prompt tokens as hints, but also preserves the mask token (cid:104) X (cid:105) to let PLMs predict tokens at the masked positions.",
"Then, a verbalizer v : Y (cid:55) V is used to map y to some label tokens v ( y ) .",
"With f ( ) and v ( ) , a classification task can be represented by a pattern-verbalizer pair ( f, v ) : arg max (cid:88) x log p (cid:0) y | x ; (cid:1) = arg max (cid:88) x log p (cid:0) (cid:104) X (cid:105) = v ( y ) | f ( x ); (cid:1) , (1) where indicates all tunable parameters, especially the parameters of PLMs.",
"For convenience, we use PVP to denote this pattern-verbalizer pair (Schick and Schtze, 2021a).",
"In PT (Lester et al., 2021), a set of soft prompts P are concatenated to the beginning of the sequence and the model input becomes [ P ; f ( x )] , where [ ; ] is the concatenation operation.",
"By tuning P , Eq.",
"(1) is replaced by arg max P (cid:88) x log p (cid:0) (cid:104) X (cid:105) = v ( y ) | [ P ; f ( x )]; P (cid:1) .",
"Owing to the power of large-scale PLMs, Eq.",
"(2) is verified to be comparable to these FT methods under full-data settings.",
"However, we find it hard to learn effective soft prompts, which may result in low performance in various few-shot scenarios.",
"The parameter initialization usually has a large impact on the difficulty of the model training and optimization, and our pilot experiments have shown that existing initialization strategies have little or even negative impact on the PT performance of large-scale PLMs.",
"We refer more details of these pilot experiments to Section 4. Recently, pre-training has been proven to be an effective method to find a good model initialization.",
"Inspired by this, we propose to pre-train soft Iron Man sacrificed himself.",
"prompts.",
"We notice that some groups of downstream tasks are related to certain self-supervised tasks built on unlabeled pre-training corpora.",
"For instance, some tasks in the form of sentence-pair classification, such as natural language inference and sentence similarity, are similar to the next sentence prediction (NSP) (Devlin et al., 2019) task used in the pre-training stage.",
"As shown in Figure 3, these tasks all take two sentences as input and compare their semantic meanings.",
"Therefore, soft prompts pre-trained by NSP can be a good initialization for these sentence-pair tasks.",
"Formally, suppose we can divide downstream tasks into m groups {T 1 , T 2 , ..., T m } , where T i is the set containing n i downstream tasks: { PVP 1 i , PVP 2 i , ..., PVP n i i } , where PVP ki = ( f ki , v ki ) .",
"For each group, we design a corresponding pre-training task PVP pre i = ( f pre i , v pre i ) .",
"After pre-training soft prompts on these tasks with all model parameters fixed, we get m pre-trained prompts { P 1 , P 2 , ..., P m } .",
"Then, for each task PVP ki in T i , we continue to optimize Eq.",
"(2) by using P i as the soft prompts initialization.",
"In this section, we take three typical classification tasks as examples to describe the design of pattern-verbalizer pairs PVP pre i for prompt pre-training.",
"Sentence-pair classification tasks such as natural language inference and sentence similarity take two sentences x = ( s 1 , s 2 ) as the input.",
"To design a PVP for these tasks, we extend the next sentence prediction in Devlin et al. (2019) to a 3-class classification with labels Y = { 0 , 1 , 2 } as the pretraining task.",
"These labels in Y can respectively 8413 indicate that the semantic relation between two sentences is coherent (with label 2), similar (1) and irrelevant (0).",
"To construct signal from unlabeled documents, we set the two sentences next to each other as label 2, those from the same document but not true next sentences as 1, and those from different documents as 0.",
"We consider the label set |Y| 3 because this covers most sentence pair tasks.",
"PVP pre i = ( f pre i , v pre i ) is given as f pre i ( x ) = s 1 (cid:104) X (cid:105) . s 2 , v pre i ( Y ) = [ no , maybe , yes ] .",
"Designing PVP ki = ( f ki , v ki ) according to PVP pre i is simple.",
"s 1 and s 2 can be replaced by the input sentence pair.",
"If a task outputs two labels, then we take v ki ( Y ) = [ no , yes ] .",
"If a task outputs three labels, we set v ki = v pre i .",
"If a task requires to measure the similarity between two sentences, the probability over { no , yes } can serve for this task.",
"Many tasks can be formulated as multiple-choice classification, which takes a query and several answer candidates as the input.",
"We design a next sentence selection task to pre-train the prompt.",
"Given a sentence as the query s q , the model is trained to select the adjacent sentence from six candidates, denoted as s 1 s 6 and thus the label set is Y = { 1 , 2 , 3 , 4 , 5 , 6 } .",
"These candidates consist of the right answer, one sentence from the same document but is not adjacent to the query, and four sentences from other documents.",
"For x = ( s q , s 1 , s 2 , , s 6 ) , ( f pre i , v pre i ) is given as f pre i ( x ) = s q ? A. s 1 F. s 6 . Answer is (cid:104) X (cid:105) . , v pre i ( Y ) = [ A , B , C , D , E , F ] .",
"Most multiple-choice tasks can use { f pre i , v pre i } directly as their PVPs.",
"For tasks like reading comprehension, the input may contain a passage and a question.",
"We concatenate them to form the query.",
"For single-sentence classification, we create pseudo labels for prompt pre-training.",
"Taking sentiment classification as an example, we use another small model to annotate sentiment labels for the sentences from the pre-training corpus and filter out those with low classification probability.",
"In practice, we use a RoBERTa BASE (Liu et al., 2019) model fine-tuned on a 5-class sentiment classification dataset other than the few-shot datasets we evaluate on.",
"Then with a sentence s from the corpus, we have the input x = ( s ) and the label set Y = { 1 , 2 , 3 , 4 , 5 } .",
"( f pre i , v pre i ) is given as f pre i ( x ) = s . (cid:104) X (cid:105) . , v pre i ( Y ) = [ terrible , bad , maybe , good , great ] .",
"For sentiment classification tasks with 5 labels, we can use PVP ki = PVP pre i .",
"For those with fewer than 5 labels, we choose a subset from v pre i ( Y ) as labels.",
"Although the above method improves the model performance, we have to point out that it is still limited to generalize to other single-text classifications in different domains and with different numbers of labels.",
"Therefore, the method described in the following section is proposed to solve this problem.",
"The above-mentioned PVPs for pre-training can be unified to a single format: multiple-choice classification.",
"Specifically, for sentence-pair classification, the query is the concatenation of the two sentences and there are three options: no, maybe, and yes.",
"For single-sentence classification, the query is the input sentence and the options are the concrete labels.",
"Note that in this way, the pre-trained PVPs can be used in single text classification tasks from arbitrary domains and with much more labels.",
"Constructing a unified PVP is similar to the idea of MultiQA (Talmor and Berant, 2019) and Uni-fiedQA (Khashabi et al., 2020).",
"Recently, Zhong et al. (2021a) use some hard prompts to unify several tasks as a meta question answering task.",
"They tune the entire model with this meta task on a collection of QA datasets and then transfer to other classification tasks under low-resource settings.",
"However, our PPT focuses on tuning soft prompts with the main body of PLMs fixed and our pretraining is conducted on fully unsupervised data, rather than the collection of supervised datasets.",
"Since different tasks may have different candidate numbers and lengths, we construct pretraining samples with option numbers varying from 2 to 16 2 and option lengths from 50 to 20.",
"We use the PVP in Section 3.2.2 for pre-training, and then apply pre-trained soft prompts to cover the above mentioned three classification tasks.",
"We conduct experiments on both Chinese and English tasks (see Table 3).",
"As described in Section 2, for tasks with fewer than 5 labels, we construct D train and D dev with 32 samples from the original training data and ensure the number of labels is balanced.",
"For tasks with more than 5 labels like TNews and YahooAnswer, it is hard to compose a dataset with label-balanced samples.",
"Therefore, we randomly select 8 samples for each label.",
"For English datasets, we conduct PT based on T5-XXL with 11B parameters because previous works (Lester et al., 2021; Zhang et al., 2022) have shown that, T5-XXL is comparable with FT under the full-data setting.",
"We also evaluate FT on various sizes of T5 to verify that larger models perform better and thus improving PT based on T5-XXL is meaningful.",
"For Chinese datasets, we do PT based on a 11B model CPM-2.",
"Since CPM-2 does not provide other size models, we compare it with mT5 (Xue et al., 2021) of various sizes.",
"Consistently, we use 100 soft tokens for PT.",
"As a result, the tunable parameters is only 100 4096 = 4 .",
"1 10 5 = 410 K. Compared with the 11B ( 1 . 1 10 10 ) parameters of FT, PT only needs to store 3000 times smaller parameters for each task.",
"For prompt pre-training, we sample 10GB data from OpenWebText (Gokaslan et al., 2019) for English tasks and 10GB data from WuDaoCor-pora (Yuan et al., 2021) for Chinese tasks.",
"We use the Yelp-5 (Zhang et al., 2015a) dataset to train the RoBERTa BASE model mentioned in Section 3.2.3.",
"More details of the training hyper-parameters can be found in the Appendix C. 4.2 Main Results The main results of English and Chinese datasets are shown in Table 4. In the block FT, we present the FT results of the T5 model from the size small to XXL.",
"In the block PT, we show the results of PPT and other baselines.",
"The first baseline is Vanilla PT, where the soft prompts are randomly initialized from a normal distribution.",
"The second is the hybrid strategy in Section 2.",
"We also consider LM Adaption used in Lester et al. (2021) in which the T5 model is further pre-trained for 10K steps with language modeling to reduce the gap between the pre-training and PT.",
"We test two variants of PPT: Hybrid PPT, in which carefully designed hard prompts are combined with pre-trained soft prompt, and Unified PPT, in which all tasks are unified in the multiple-choice classification format.",
"Effectiveness From the Table 4 we have four observations.",
"First, larger models achieve better overall performance, which means increasing the model size still helps under the few-shot setting.",
"Therefore, we study PT on the large-scale pre-trained model.",
"Note that for Chinese experiments, CPM-2 and mT5-XXL share the same parameter scale.",
"Since CPM-2 outperforms mT5-XXL across all tasks, we use CPM-2 as the base model.",
"Second, PPT outperforms Vanilla PT and LM Adaption on most datasets significantly.",
"Although PPT is worse than Hybrid PT on BoolQ, combining PPT and hard prompts (Hybrid PPT) outperforms all baselines.",
"This means pre-training soft prompts and using hybrid prompts are complementary.",
"Similar phenomenons are observed on other datasets like RACE-m, LCQMC, and C 3 , where adding hard prompts to PPT continues to improve results.",
"Third, PPT outperforms FT on all Chinese datasets and most English datasets.",
"This indicates that there still remains a gap between masked language modeling and downstream tasks.",
"Prompt pre-training bridges this gap to some extend.",
"Based on this observation, an intuitive extension of our method is to further pre-train the entire model with PVP pre i and fine-tune the model to the corresponding downstream tasks.",
"However, since we focus on PT in this paper, we leave this as future work.",
"Fourth, PPT results in lower variances on most of the datasets.",
"Few-shot learning is notorious for its instability, which becomes very obvious in Vanilla PT.",
"For some datasets like SST-2, the variance reaches 15.5 which means the model does not perform better than random guesses under some 8415 English Tasks Model Method SST-2 SST-5 RACE-m RACE-h BoolQ RTE CB Acc.",
"random seeds.",
"Combining with hard prompt or further pre-training with language modeling can alleviate this problem to some extent.",
"But on some datasets like CCPM, Hybrid PT increases the variance and LM Adaption does not guarantee the average performance.",
"With the help of pre-training, the variance remains at a low level across all datasets.",
"Unified PPT Unifying all formats to multiple-choice classification format is another variant of PPT.",
"In Table 4, we can see that Unified PPT reaches comparable performance as PPT and Hybrid PPT, still outperforming other PT baselines.",
"However, the datasets we have considered so far have no more than 5 labels.",
"For tasks with more labels, especially single-text classification where pseudo label pre-training is not appropriate for cross-domain adaption, Unified PPT is a good alternative.",
"In Table 5, we test Unified PPT on datasets with more than 5 labels.",
"For PT and FT, we use TNews YahooAns n class 14 10 FT 43 .",
"a verbalizer to map the labels to the intuitively selected words.",
"PT (MC) means we solve the task in a multiple-choice classification format without prompt pre-training.",
"We do not use PPT for single-sentence classification discussed in Section 3.2.3 because it is hard to find other suitable datasets to train the pseudo label annotator.",
"However, we can see that Unified PPT still achieves the best performance, even exceeding FT by a large margin.",
"We discuss how the performance of FT, PT, and PPT varies when the number of training samples increases.",
"In Figure 4, we show the trend of these methods on the RACE-m and CB datasets.",
"For 32 to 128 samples, PPT is consistently better than PT, and the performances of the three methods gradually converge when the number grows to 256.",
"We also compare different tuning approaches given the full training data.",
"From Table 6, we can see that PPT and Unified PPT still outperform the Vanilla PT on most datasets.",
"In addition, we observe that although PT is faster than FT in a single optimization step, it converges much slower, which results in an even longer training time.",
"We argue that PPT can be an effective solution to this problem.",
"As shown in Figure 5, with the pre-trained initialization, PPT speeds up the convergence of Vanilla PT on both RACE-m and CB datasets.",
"We give a more detailed analysis of the training consumption in the Appendix E. Since PPT still converges a bit slower than FT, how to further accelerate the convergence of PT is worth studying in future work.",
"PLMs and Task-oriented Fine-tuning Recently, various powerful PLMs have been proposed, such as GPT (Radford et al., 2018), BERT (De-vlin et al., 2019), RoBERTa (Liu et al., 2019) and T5 (Raffel et al., 2020).",
"To adapt these PLMs to downstream NLP tasks, task-oriented fine-tuning has been proposed, where researchers use PLMs as the backbone and add some task-specific heads to optimize task-specific objectives.",
"Then, all parameters of both PLMs and additional heads are tuned using task-specific data.",
"Results have shown that task-oriented fine-tuning can outperform models trained from scratch on a series of NLP tasks.",
"Prompt-oriented Fine-tuning Most existing PLMs are pre-trained with language modeling objectives, yet the objectives of downstream tasks are quite different.",
"To overcome the gap between pretraining and downstream tasks, prompt-oriented fine-tuning is introduced.",
"In prompt-oriented fine-tuning, downstream tasks are also formalized as language modeling problems by inserting language prompts, and the results of language modeling can correspond to the solutions of downstream tasks.",
"Knowledge probing (Petroni et al., 2019; Trinh and Le, 2018; Davison et al., 2019) is the seminal work that stimulates the development of prompts.",
"In knowledge probing, language triggers are widely used to induce PLMs to generate relational facts.",
"These pioneering works demonstrate that language prompts can effectively stimulate the knowledge from PLMs.",
"Encouraged by this, manually designing hard prompts consisting of discrete words is first used in prompt-oriented fine-tuning Schick and Schtze (2021a,b).",
"Considering manually designing prompts is both time-consuming and difficult to find the best choice, later works (Gao et al., 2021; 8417 Jiang et al., 2020; Shin et al., 2020) proposed to generate prompts automatically.",
"However, these works still restrict auto-generated prompts to discrete spaces which are usually sub-optimal.",
"To overcome the shortcomings of discrete spaces, Li and Liang (2021); Liu et al. (2021); Han et al. (2021b); Hambardzumyan et al. (2021); Zhong et al. (2021b) explore to combine hard prompts and soft prompts.",
"Different from hard prompts using concrete and discrete tokens, soft prompts are composed of several continuous learnable embeddings, and these embeddings are randomly initialized.",
"To step forward, some works (Li and Liang, 2021; Qin and Eisner, 2021; Lester et al., 2021) propose to only tune soft prompts and fix the entire PLM parameters.",
"When models are large enough, this method can be comparable to full-model tuning.",
"Few-shot Learning with PLMs Since long-tail distribution is common in real-world applications, few-shot learning is quite meaningful for the stable and effective use of PLMs, thereby attracts much attention recently.",
"Apart from GPT-3 (Brown et al., 2020) and PET(Schick and Schtze, 2021a) which demonstrates the superiority of PLMs in few-shot scenarios, some later works Perez et al. (2021); Bragg et al. (2021) also discuss reasonable few-shot settings by restricting the size of validation set and proposing a unified framework to evaluate few-shot performance.",
"There is also work (IV et al., 2021) pointing out the low performance of PT for few-shot learning.",
"But they mostly focus on PLMs with fewer than 400M parameters.",
"In this paper, we study few-shot learning on large-scale 11B PLMs.",
"In this paper, we present PPT, a framework that improves prompt tuning for few-shot learning.",
"We propose to firstly unify downstream tasks to several formats.",
"Then, we design self-supervised pre-training tasks for each format and pre-train prompts on these tasks.",
"Finally, we do prompt tuning on downstream tasks based on the pre-trained initialization.",
"Extensive experiments show that our method significantly outperforms other prompt tuning baselines, performing comparable or even better than full-model tuning.",
"There are three important directions for future work: (1) Designing unified task formats and the corresponding pre-training objectives for other kinds of tasks such as language generation and relation extraction.",
"(2) Evaluating the few-shot performance of other parameter-efficient tuning approaches (He et al., 2022) and adapting unified task pre-training to them.",
"(3) Beyond the soft prompt, studying whether unified task pre-training helps the pre-trained language models itself.",
"This work was supported by the National Science Foundation for Distinguished Young Scholars (with No. 62125604) and the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096).",
"This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005."
] | [
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"abstain",
"result",
"method",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"result",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Aiming for a better integration of data-driven and linguistically-inspired approaches, we explore whether RST Nuclearity, assigning a binary assessment of importance between text segments, can be replaced by automatically generated, real-valued scores, in what we call a Weighted-RST framework.",
"In particular, we find that weighted discourse trees from auxiliary tasks can benefit key NLP downstream applications compared to nuclearity-centered approaches.",
"We further show that real-valued importance distributions partially and interestingly align with the assessment and uncertainty of human annotators.",
"Ideally, research in Natural Language Processing (NLP) should balance and integrate findings from machine learning approaches with insights and theories from linguistics.",
"With the enormous success of data-driven approaches over the last decades, this balance has arguably and excessively shifted, with linguistic theories playing a less and less critical role.",
"Even more importantly, there are only little attempts made to improve such theories in light of recent empirical results.",
"In the context of discourse, two main theories have emerged in the past: The Rhetorical Structure Theory (RST) (Carlson et al., 2002) and PDTB (Prasad et al., 2008).",
"In this paper, we focus on RST, exploring whether the underlying theory can be refined in a data-driven manner.",
"In general, RST postulates a complete discourse tree for a given document.",
"To obtain this formal representation as a projective consituency tree, a given document is first separated into so called Elementary Discourse Units (or short EDUs), representing clause-like sentence fragments of the input Equal contribution.",
"document.",
"Afterwards, the discourse tree is built by hierarchically aggregating EDUs into larger constituents annotated with an importance indicator (in RST called nuclearity) and a relation holding between siblings in the aggregation.",
"The nuclearity attribute in RST thereby assigns each sub-tree either a nucleus-attribute, indicating central importance of the sub-tree in the context of the document, or a satellite-attribute, categorizing the sub-tree as of peripheral importance.",
"The relation attribute further characterizes the connection between sub-trees (e.g. Elaboration, Cause, Contradiction).",
"One central requirement of the RST discourse theory, as for all linguistic theories, is that a trained human should be able to specify and interpret the discourse representations.",
"While this is a clear advantage when trying to generate explainable outcomes, it also introduces problematic, human-centered simplifications; the most radical of which is arguably the nuclearity attribute, indicating the importance among siblings.",
"Intuitively, such a coarse (binary) importance assessment does not allow to represent nuanced differences regarding sub-tree importance, which can potentially be critical for downstream tasks.",
"For instance, the importance of two nuclei siblings is rather intuitive to interpret.",
"However, having siblings annotated as nucleus-satellite or satellite-nucleus leaves the question on how much more important the nucleus sub-tree is compared to the satellite, as shown in Figure 1.",
"In general, it is unclear (and unlikely) that the actual importance distributions between siblings with the same nuclearity attribution are consistent.",
"Based on this observation, we investigate the potential of replacing the binary nuclearity assessment postulated by RST with automatically generated, real-valued importance scores in a new, W eightedRST framework.",
"In contrast with previous work that has assumed RST and developed Figure 1: Document wsj 0639 from the RST-DT corpus with inconsistent importance differences between N-S attributions.",
"computational models of discourse by simply applying machine learning methods to RST annotated treebanks (Ji and Eisenstein, 2014; Feng and Hirst, 2014; Joty et al., 2015; Li et al., 2016; Wang et al., 2017; Yu et al., 2018), we rely on very recent empirical studies showing that weighted silver-standard discourse trees can be inferred from auxiliary tasks such as sentiment analysis (Huber and Carenini, 2020b) and summarization (Xiao et al., 2021).",
"In our evaluation, we assess both, computational benefits and linguistic insights.",
"In particular, we find that automatically generated, weighted discourse trees can benefit key NLP downstream tasks.",
"We further show that real-valued importance scores (at least partially) align with human annotations and can interestingly also capture uncertainty in human annotators, implying some alignment of the importance distributions with linguistic ambiguity.",
"First introduced by Mann and Thompson (1988), the Rhetorical Structure Theory (RST) has been one of the primary guiding theories for discourse analysis (Carlson et al., 2002; Subba and Di Eugenio, 2009; Zeldes, 2017; Gessler et al., 2019; Liu and Zeldes, 2019), discourse parsing (Ji and Eisenstein, 2014; Feng and Hirst, 2014; Joty et al., 2015; Li et al., 2016; Wang et al., 2017; Yu et al., 2018), and text planning (Torrance, 2015; Gatt and Krahmer, 2018; Guz and Carenini, 2020).",
"The RST framework thereby comprehensively describes the organization of a document, guided by the author's communicative goals, encompassing three components: (1) A projective constituency tree structure, often referred to as the tree span.",
"(2) A nuclearity attribute, assigned to every internal node of the discourse tree, encoding relative importance between the nodes' sub-trees, with the nucleus expressing primary importance and a satellite signifying supplementary sub-trees.",
"(3) A relation attribute for every internal node describing the relationship between the sub-trees of a node (e.g., Contrast, Evidence, Contradiction).",
"Arguably, the weakest aspect of an RST representation is the nuclearity assessment, which makes a too coarse differentiation between primary and secondary importance of sub-trees.",
"However, despite its binary assignment of importance and even though the nuclearity attribute is only one of three components of an RST tree, it has major implications for many downstream tasks, as already shown early on by Marcu (1999), using the nuclearity attribute as the key signal in extractive summarization.",
"Further work in sentiment analysis (Bhatia et al., 2015) also showed the importance of nuclearity for the task by first converting the constituency tree into a dependency tree (more aligned with the nuclearity attribute) and then using that tree to predict sentiment more accurately.",
"Both of these results indicate that nuclearity, even in the coarse RST version, already contains valuable information.",
"Hence, we believe that this coarse-grained classification is reasonable when manually annotating discourse, but see it as a major point of improvement, if a more fine-grained assessment could be correctly assigned.",
"We therefore explore the potential of assigning a weighted nuclearity attribute in this paper.",
"While plenty of studies have highlighted the important role of discourse for real-world downstream tasks, including summarization, (Gerani et al., 2014; Xu et al., 2020; Xiao et al., 2020), sentiment analysis (Bhatia et al., 2015; Hogenboom et al., 2015; Nejat et al., 2017) and text classification (Ji and Smith, 2017), more critical to our approach is very recent work exploring such connection in the opposite direction.",
"In Huber and Carenini (2020b), we exploit sentiment related information to generate silver-standard nuclearity annotated discourse trees, showing their potential on the domain-transfer discourse parsing task.",
"Crucially for our purposes, this approach internally generates real-valued importance-weights for trees.",
"For the task of extractive summarization, we follow our intuition given in Xiao et al. (2020) and Xiao et al. (2021), exploiting the connection be-Figure 2: Three phases of our approach to generate weighted RST-style discourse trees.",
"Left and center steps are described in section 3, right component is described in section",
"4. = As in Huber and Carenini (2020b), = As in Marcu (1999), = Sentiment prediction component is a linear combination, mapping the aggregated embedding to the sentiment output.",
"The linear combination has been previously learned on the training portion of the dataset.",
"tween summarization and discourse.",
"In particular, in Xiao et al. (2021), we demonstrate that the self-attention matrix learned during the training of a transformer-based summarizer captures valid aspects of constituency and dependency discourse trees.",
"To summarize, building on our previous work on creating discourse trees through distant supervision, we take a first step towards generating weighted discourse trees from the sentiment analysis and summarization tasks.",
"Given the intuition from above, we combine information from machine learning approaches with insights from linguistics, replacing the human-centered nuclearity assignment with real-valued weights obtained from the sentiment analysis and summarization tasks 1 .",
"An overview of the process to generate weighted RST-style discourse trees is shown in Figure 2, containing the training phase (left) and the W-RST discourse inference phase (center) described here.",
"The W-RST discourse evaluation (right), is covered in section",
"4. 3.1 Weighted Trees from Sentiment To generate weighted discourse trees from sentiment, we slightly modify the publicly available code 2 presented in Huber and Carenini (2020b) by removing the nuclearity discretization component.",
"An overview of our method is shown in Figure 2 (top), while a detailed view is presented in the left and center parts of Figure",
"3. First (on the left), we train the Multiple Instance Learning (MIL) 1 Please note that both tasks use binarized discourse trees, as commonly used in computational models of RST.",
"2 Code available at https://github.com/nlpat/ MEGA-DT model proposed by Angelidis and Lapata (2018) on a corpus with document-level sentiment gold-labels, internally annotating each input-unit (in our case EDUs) with a sentimentand attention-score.",
"After the MIL model is trained (center), a tuple ( s i , a i ) containing a sentiment score s i and an attention a i is extracted for each EDU i .",
"Based on these tuples representing leaf nodes, the CKY algorithm (Jurafsky and Martin, 2014) is applied to find the tree structure to best align with the overall document sentiment, through a bottom-up aggregation approach defined as 3 : s p = s l a l + s r a r a l + a r a p = a l + a r 2 with nodes l and r as the left and right child-nodes of p respectively.",
"The attention scores ( a l , a r ) are here interpreted as the importance weights for the respective sub-trees ( w l = a l / ( a l + a r ) and w r = a r / ( a l + a r ) ), resulting in a complete, normalized and weighted discourse structure as required for W-RST.",
"We call the discourse treebank generated with this approach W-RST-Sent .",
"In order to derive weighted discourse trees from a summarization model we follow Xiao et al. (2021) 4 , generating weighted discourse trees from the self-attention matrices of a transformer-based summarization model.",
"An overview of our method is shown in Figure 2 (bottom), while a detailed view is presented in the left and center parts of Figure",
"4. We start by training a transformer-based extractive summarization model (left), containing three 3 Equations taken from Huber and Carenini (2020b) 4 Code available at https://github.com/ Wendy-Xiao/summ_guided_disco_parser Figure 3: Three phases of our approach.",
"With the trained transformer model, we then extract the self-attention matrix A and build a discourse tree in bottom-up fashion (as shown in the center of Figure 4).",
"Specifically, the self-attention matrix A reflects the relationships between units in the document, where entry A ij measures how much the i -th EDU relies on the j -th EDU.",
"Given this information, we generate an unlabeled constituency tree using the CKY algorithm (Jurafsky and Martin, 2014), optimizing the overall tree score, as previously done in Xiao et al. (2021).",
"In terms of weight-assignment, given a sub-tree spanning EDUs i to j , split into child-constituents at EDU k , then max( A i : k, ( k +1): j ) , representing the maximal attention value that any EDU in the left constituent is paying to an EDU in the right child-constituent, reflects how much the left sub-tree relies on the right sub-tree, while max( A ( k +1): j,i : k ) defines how much the right sub-tree depends on the left.",
"We define the importance-weights of the left ( w l ) and right ( w r ) sub-trees as: w l = max( A ( k +1): j,i : k ) / ( w l + w r ) w r = max( A i : k, ( k +1): j ) / ( w l + w r ) In this way, the importance scores of the two subtrees represent a real-valued distribution.",
"In combination with the unlabeled structure computation, we generate a weighted discourse tree for each document.",
"We call the discourse treebank generated with the summarization downstream information W-RST-Summ .",
"To assess the potential of W-RST, we consider two evaluation scenarios (Figure 2, right): (1) Apply weighted discourse trees to the tasks of sentiment analysis and summarization and (2) analyze the weight alignment with human annotations.",
"In this evaluation scenario, we address the question of whether W-RST trees can support downstream tasks better than traditional RST trees with nuclearity.",
"Specifically, we leverage the discourse trees learned from sentiment for the sentiment analysis task itself and, similarly, rely on the discourse trees learned from summarization to benefit the summarization task.",
"In order to predict the sentiment of a document in W-RST-Sent based on its weighted discourse tree, we need to introduce an additional source of information to be aggregated according to such tree.",
"Here, we choose word embeddings, as commonly used as an initial transformation in many models tackling the sentiment prediction task (Kim, 2014; Tai et al., 2015; Yang et al., 2016; Adhikari et al., 2019; Huber and Carenini, 2020a).",
"To avoid introducing additional confounding factors through sophisticated tree aggregation approaches (e.g. TreeLSTMs (Tai et al., 2015)), we select a simple method, aiming to directly compare the inferred tree-structures and allowing us to better assess the performance differences originating from the weight/nuclearity attribution (see right step in Figure 3).",
"More specifically, we start by computing the average word-embedding for each leaf node leaf i (here containing a single EDU) in the discourse tree.",
"j< | leaf |",
"With | leaf i | as the number of words in leaf i , Emb ( ) being the embedding lookup and word ji representing word j within leaf i .",
"Subsequently, we aggregate constituents, starting from the leaf nodes (with leaf i as embedding constituent c i ), according to the weights of the discourse tree.",
"For any two sibling constituents c l and c r of the parent sub-tree c p in the binary tree, we compute c p = c l w l + c r w r with w l and w r as the real-valued weight-distribution extracted from the inferred discourse tree and c p , c l and c r as dense encodings.",
"We aggregate the complete document in bottom-up fashion, eventually reaching a root node embedding containing a tree-weighted average of the leaf-nodes.",
"Given the root-node embedding representing a complete document, a simple Multilayer Perceptron (MLP) trained on the original training portion of the MIL model is used to predict the sentiment of the document.",
"In the evaluation step of the summarization model (right of Figure 4), we use the weighted discourse tree of a document in W-RST-Summ to predict its extractive summary by applying an adaptation of",
"We choose this straightforward algorithm over more elaborate and hyper-parameter heavy approaches to avoid confounding factors, since our aim is to evaluate solely the potential of the weighted discourse trees compared to standard RST-style annotations.",
"In the original algorithm, a summary is computed based on the nuclearity attribute by recursively computing the importance scores for all units as: S n ( u, N ) = d N , u P rom ( N ) S ( u, C ( N )) s.t. u C ( N ) otherwise where C ( N ) represents the child of N , and P rom ( N ) is the promotion set of node N , which is defined in bottom-up fashion as follows: (1) P rom of a leaf node is the leaf node itself.",
"(2) P rom of an internal node is the union of the promotion sets of its nucleus children.",
"Furthermore, d N represents the level of a node N , computed as the distance from the level of the lowest leaf-node.",
"This way, units in the promotion set originating from nodes that are higher up in the discourse tree are ampli-fied in their importance compared to those from lower levels.",
"As for the W-RST-Summ discourse trees with real-valued importance-weights, we adapt Marcu's algorithm by replacing the promotion set with real-valued importance scores as shown here: S w ( u, N ) = d + w N , N is leaf S w ( u, C ( N )) + w N , u C ( N ) otherwise Once S n or S w are computed, the top-k units of the highest promotion set or with the highest importance scores respectively are selected into the final summary.",
"To test whether the W-RST trees are effectively predicting the downstream tasks, we need to generate traditional RST trees with nuclearity to compare against.",
"However, moving from weighted discourse trees to coarse nuclearity requires the introduction of a threshold.",
"More specifically, while nucleus-satellite and satellite-nucleus assignments can be naturally generated depending on the distinct weights, in order to assign the third nucleus-nucleus class, frequently appearing in Figure 5: Three phases of our approach.",
"This way, RST-style treebanks with nuclearity attributions can be generated from W-RST-Sent and W-RST-Summ and used for the sentiment analysis and summarization downstream tasks.",
"For the nuclearity-attributed baseline of the sentiment task, we use a similar approach as for the W-RST evaluation procedure, but assign two distinct weights w n and w s to the nucleus and satellite child respectively.",
"Since it is not clear how much more important a nucleus node is compared to a satellite using the traditional RST notation, we define the two weights based on the threshold t as: w n = 1 (1 2 t ) / 4 w s = (1 2 t ) / 4 The intuition behind this formulation is that for a high threshold t (e.g. 0 . 8 ), the nuclearity needs to be very prominent (the difference between the normalized weights needs to exceed 0 . 8 ), making the nucleus clearly more important than the satellite, while for a small threshold (e.g. 0 . 1 ), even relatively balanced weights (for example w l = 0 . 56 , w r = 0 . 44 ) will be assigned as nucleus-satellite, resulting in the potential difference in importance of the siblings to be less eminent.",
"For the nuclearity-attributed baseline for summarization, we directly apply the original algorithm by Marcu (1999) as described in section 4.1.2.",
"However, when using the promotion set to determine which EDUs are added to the summarization, potential ties can occur.",
"Since the discourse tree does not provide any information on how to prioritize those, we randomly select units from the candidates, whenever there is a tie.",
"This avoids exploiting any positional bias in the data (e.g. the lead bias), which would confound the results.",
"As for our second W-RST discourse evaluation task, we investigate if the real-valued importance-weights align with human annotations.",
"To be able to explore this scenario, we generate weighted tree annotations for an existing discourse treebank (RST-DT (Carlson et al., 2002)).",
"In this evaluation task we verify if: (1) The nucleus in a gold-annotation generally receives more weight than a satellite (i.e. if importance-weights generally favour nuclei over satellites) and, similarly, if nucleus-nucleus relations receive more balanced weights.",
"(2) In accordance with Figure 1, we further explore how well the weights capture the extend to which a relation is dominated by the nucleus.",
"Here, our intuition is that for inconsistent human nuclearity annotations the spread should generally be lower than for consistent annotations, assuming that human misalignment in the discourse annotation indicates ambivalence on the importance of sub-trees.",
"To test for these two properties, we use discourse documents individually annotated by two human annotators and analyze each sub-tree within the doubly-annotated documents with consistent inter-annotator structure assessment for their nuclearity assignment.",
"For each of the 6 possible inter-annotator nuclearity assessments, consisting of 3 consistent annotation classes (namely N-N/N-N, N-S/N-S and S-N/S-N) and 3 inconsistent annotation classes (namely N-N/N-S, N-N/S-N and N-S/S-N) 5 , we explore the respective weight distribution of the document annotated with the two W-RST tasks sentiment analysis and summarization (see Figure 5).",
"We compute an average spread s c for each of the 6 inter-annotator nuclearity assessments classes c as: s c = ( j< | c | (cid:88) j =0 w jl w jr ) / | c | With w j l and w jr as the weights of the left and right child node of sub-tree j in class c , respectively.",
"Sentiment Analysis: We follow our previous approach in Huber and Carenini (2020b) for the model training and W-RST discourse inference steps (left and center in Figure 3) using the adapted MILNet model from Angelidis and Lapata (2018) trained with a batch-size of 200 and 100 neurons in a single layer bi-directional GRU with 20% dropout for 25 epochs.",
"Next, discourse trees are generated using the best-performing heuristic CKY method with the stochastic exploration-exploitation trade-off from Huber and Carenini (2020b) (beam size 10 , linear decreasing ).",
"As word-embeddings in the W-RST discourse evaluation (right in Figure 3), we use GloVe embeddings (Pennington et al., 2014), which previous work (Tai et al., 2015; Huber and Carenini, 2020a) indicates to be suitable for aggregation in discourse processing.",
"For training and evaluation of the sentiment analysis task, we use the 5-class Yelp'13 review dataset (Tang et al., 2015).",
"To compare our approach against the traditional RST approach with nuclearity, we explore the impact of 11 distinct thresholds for the baseline described in 4.1.3, ranging from 0 to 1 in 0 .",
"1 intervals.",
"Summarization: To be consistent with RST, our summarizer extracts EDUs instead of sentences from a given document.",
"The model is trained on the EDU-segmented CNNDM dataset containing EDU-level Oracle labels published by Xu et al. (2020).",
"We further use a pre-trained BERT-base (uncased) model to generate the embeddings of EDUs.",
"The transformer used is the standard model with 6 layers and 8 heads in each layer ( d = 512 ).",
"We train the extractive summarizer on the training set of the CNNDM corpus (Nallapati et al., 2016) and pick the best attention head using the RST-DT dataset (Carlson et al., 2002) as the development set.",
"We test the trees by running the summarization algorithm in Marcu (1999) on the test set of the CNNDM dataset, and select the top-6 EDUs based on the importance score to form a summary in natural order.",
"Regarding the baseline model using thresholds, we apply the same 11 thresholds as for the sentiment analysis task.",
"As discussed in 4.2, this evaluation requires two parallel human generated discourse trees for every document.",
"Luckily, in the RST-DT corpus pub-0 .",
"lished by Carlson et al. (2002), 53 of the 385 documents annotated with full RST-style discourse trees are doubly tagged by a second linguist.",
"We use the 53 documents containing 1 , 354 consistent structure annotations between the two analysts to evaluate the linguistic alignment of our generated W-RST documents with human discourse interpretations.",
"Out of the 1 , 354 structure-aligned subtrees, in 1 , 139 cases both annotators agreed on the nuclearity attribute, while 215 times a nuclearity mismatch appeared, as shown in detail in Table 1.",
"The results of the experiments on the discourse applications for sentiment analysis and summarization are shown in Figure 6.",
"The results for Sent N-N N-S S-N N-N -0.228 (106) -0.238 (33) -0.240 (19) N-S --0.038 (325) -0.044 (22) S-N --0.278 (115) Summ N-N N-S S-N N-N 0.572 (136) 0.604 (42) 0.506 (25) N-S -0.713 (418) 0.518 (36) S-N -0.616 (134) Table 2: Confusion Matrices based on human annotation showing the absolute weight-spread using the Sentiment (top) and Summarization (bottom) tasks on 620 and 791 sub-trees aligned with the human structure prediction, respectively.",
"sentiment analysis (top) and summarization (bot-tom) thereby show a similar trend: With an increasing threshold and therefore a larger number of N-N relations (shown as grey bars in the Figure), the standard RST baseline (blue line) consistently improves for the respective performance measure of both tasks.",
"However, reaching the best performance at a threshold of 0 .",
"8 for sentiment analysis and 0 .",
"6 for summarization, the performance starts to deteriorate.",
"This general trend seems reasonable, given that N-N relations represent a rather frequent nuclearity connection, however classifying every connection as N-N leads to a severe loss of information.",
"Furthermore, the performance suggests that while the N-N class is important in both cases, the optimal threshold varies depending on the task and potentially also the corpus used, making further task-specific fine-tuning steps mandatory.",
"The weighted discourse trees following our W-RST approach, on the other hand, do not require the definition of a threshold, resulting in a single, promising performance (red line) for both tasks in Figure 6.",
"For comparison, we apply the generated trees of a standard RST-style discourse parser (here the Two-Stage parser by Wang et al. (2017)) trained on the RST-DT dataset (Carlson et al., 2002) on both downstream tasks.",
"The fully-supervised parser reaches an accuracy of 44.77% for sentiment analysis and an average ROUGE score of 26.28 for summarization.",
"While the average ROUGE score Sent N-N N-S S-N N-N -0.36 -0.43 -0.45 N-S +1.00 +0.96 S-N - -0.72 Summ N-N N-S S-N N-N -0.13 +0.13 -0.66 N-S +1.00 -0.56 S-N - +0.22 Table 3: Confusion Matrices based on human annotation showing the weight-spread relative to the task-average for Sentiment (top) and Summarization (bot-tom), aligned with the human structure prediction, respectively.",
"of the fully-supervised parser is above the performance of our W-RST results for the summarization task, the accuracy on the sentiment analysis task is well below our approach.",
"We believe that these results are a direct indication of the problematic domain adaptation of fully supervised discourse parsers, where the application on a similar domain (Wall Street Journal articles vs. CNN-Daily Mail articles) leads to superior performances compared to our distantly supervised method, however, with larger domain shifts (Wall Street Journal articles vs. Yelp customer reviews), the performance drops sig-nificantly, allowing our distantly supervised model to outperform the supervised discourse trees for the downstream task.",
"Arguably, this indicates that although our weighted approach is still not competitive with fully-supervised models in the same domain, it is the most promising solution available for cross-domain discourse parsing.",
"With respect to exploring the weight alignment with human annotations , we show a set of confusion matrices based on human annotation for each W-RST discourse generation task on the absolute and relative weight-spread in Tables 2 and 3 respectively.",
"The results for the sentiment analysis task are shown on the top of both tables, while the performance for the summarization task is shown at the bottom.",
"For instance, the top right cell of the upper confusion matrix in Table 2 shows that for 19 sub-trees in the doubly annotated subset of RST-DT one of the annotators labelled the subtree with a nucleus-nucleus nuclearity attribution, while the second annotator identified it as satellite-nucleus.",
"The average weight spread (see 4.2) for those 19 sub-trees is 0 .",
"24 .",
"Regarding Table 3, we subtract the average spread across Table 2 defined as = (cid:80) c i C ( c i ) / | C | (with C = { c 1 , c 2 , ...c 6 } containing the cell values in the upper triangle matrix) from each cell value c i and normalize by max = max c i C ( | c i | ) , with = 0 .",
"177 and max = 0 .",
"1396 across the top table.",
"Accordingly, we transform the 0 .",
"24 in the top right cell into ( 0 . 24 avg ) /max = 0 .",
"45 .",
"Moving to the analysis of the results, we find the following trends in this experiment: (1) As presented in Table 2, the sentiment analysis task tends to strongly over-predict S-N (i.e., w l << w r ), leading to negative spreads in all cells.",
"In contrast, the summarization task is heavily skewed towards N-S assignments (i.e., w l >> w r ), leading to exclusively positive spreads.",
"We believe both trends are consistent with the intrinsic properties of the tasks, given that the general structure of reviews tends to become more important towards the end of a review (leading to increased S-N assignments), while for summarization, the lead bias potentially produces the overall strong nucleus-satellite trend.",
"(2) To investigate the relative weight spreads for different human annotations (i.e., between cells) beyond the trends shown in Table 2, we normalize values within a table by subtracting the average and scaling between [ 1 , 1] .",
"As a result, Table 3 shows the relative weight spread for different human annotations.",
"Apart from the general trends described in Table 2, the consistently annotated samples of the two linguists (along the diagonal of the confusion matrices) align reasonably.",
"The most positive weight spread is consistently found in the agreed-upon nucleus-satellite case, while the nucleus-nucleus annotation has, as expected, the lowest divergence (i.e., closest to zero) along the diagonal in Table",
"3. (3) Regarding the inconsistently annotated samples (shown in the triangle matrix above the diagonal) it becomes clear that in the sentiment analysis model the values for the N-N/N-S and N-N/S-N annotated samples (top row in Table 3) are relatively close to the average value.",
"This indicates that, similar to the nucleus-nucleus case, the weights are also ambivalent, with the N-N/N-S value (top center) slightly larger than the value for N-N/S-N (top right).",
"The N-S/S-N case for the sentiment analysis model is less aligned with our intuition, showing a strongly negative weight-spread (i.e. w l << w r ) where we would have expected a more ambivalent result with w l w r (however, aligned with the overall trend shown in Table 2).",
"For summarization, we see a very similar trend with the values for N-N/N-S and N-N/S-N annotated samples.",
"Again, both values are close to the average, with the N-N/N-S cell showing a more positive spread than N-N/S-N.",
"However for summarization, the consistent satellite-nucleus annotation (bottom right cell) seems misaligned with the rest of the table, following instead the general trend for summarization described in Table",
"2. All in all, the results suggest that the values in most cells are well aligned with what we would expect regarding the relative spread.",
"Interestingly, human uncertainty appears to be reasonably captured in the weights, which seem to contain more fine grained information about the relative importance of sibling sub-trees.",
"We propose W-RST as a new discourse framework, where the binary nuclearity assessment postulated by RST is replaced with more expressive weights, that can be automatically generated from auxiliary tasks.",
"A series of experiments indicate that W-RST is beneficial to the two key NLP downstream tasks of sentiment analysis and summarization.",
"Further, we show that W-RST trees interestingly align with the uncertainty of human annotations.",
"For the future, we plan to develop a neural discourse parser that learns to predict importance weights instead of nuclearity attributions when trained on large W-RST treebanks.",
"More longer term, we want to explore other aspects of RST that can be refined in light of empirical results, plan to integrate our results into state-of-the-art sentiment analysis and summarization approaches (e.g. Xu et al. (2020)) and generate parallel W-RST structures in a multi-task manner to improve the generality of the discourse trees.",
"We thank the anonymous reviewers for their insightful comments.",
"This research was supported by the Language & Speech Innovation Lab of Cloud BU, Huawei Technologies Co., Ltd and the Natural Sciences and Engineering Research Council of Canada (NSERC).",
"Nous remercions le Conseil de recherches en sciences naturelles et en genie du Canada (CRSNG) de son soutien."
] | [
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"abstain",
"objective",
"method",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Fine-grained Entity Typing (FET) has made great progress based on distant supervision but still suffers from label noise.",
"Existing FET noise learning methods rely on prediction distributions in an instance-independent manner, which causes the problem of confirmation bias.",
"In this work, we propose a clustering-based loss correction framework named Feature Cluster Loss Correction (FCLC), to address these two problems.",
"FCLC first train a coarse backbone model as a feature extractor and noise estimator.",
"Loss correction is then applied to each feature cluster, learning directly from the noisy labels.",
"Experimental results on three public datasets show that FCLC achieves the best performance over existing competitive systems.",
"Auxiliary experiments further demonstrate that FCLC is stable to hyperparameters and it does help mitigate confirmation bias.",
"We also find that in the extreme case of no clean data, the FCLC framework still achieves competitive performance.",
"Fine-grained entity typing (FET) is the task of classifying named entity mentions in a sentence over the given class set (typically a hierarchical class structure as shown in Fig.",
"1. FET serves as an important component in many down-stream NLP applications, e.g., relation extraction (Liu et al., 2014), entity linking (Raiman and Raiman, 2018) and question answering (Dong et al., 2015).",
"FET task has a more wide range of entity types (usu-ally over 100 classes) compared to entity typing, and hence neural-based FET systems require large-scale annotated training corpus.",
"as the ground-truth labels.",
"Although large-scale annotated data is provided, it brings about label noises in training.",
"To overcome the problem of noisy label, some works directly pruned noisy instances (Gillick et al., 2014; Onoe and Durrett, 2019a).",
"The others retain noisy training data but further improve by choosing (Ren et al., 2016a; Xu and Barbosa, 2018), weighting (Wu et al., 2019), and relabeling (Zhang et al., 2020) noisy labels using the prediction distribution.",
"However, these noise combating methods have two major limitations.",
"1) They rely on the prediction distribution.",
"As a result, they ought to cope with instance-agnostic noise better.",
"The previous works expirically show (Zheng and Yang, 2021) that the prediction distribution is more likely to be affected by noisy instances and suffer from confirmation bias .",
"This bias problem is also verified in our Sec. 3.5.",
"The limitation leads to the intriguing question: Besides prediction distribution and entropy, what other information can we use to model label noise?",
"2) They mostly aim to modify each instance isolatedly and only use instance-level information.",
"Meanwhile, typical anti-noise machine learning (Patrini et al., 2017; Hendrycks et al., 2018) uses instance-agnostic global statistics.",
"The latter is 1997 more robust to noise but might be too general.",
"Local information is potentially more informative.",
"For example, when the distant supervision introduces similar noise in some instances, these noises form a locality in feature space.",
"The noisy instances are near to each other and are separate from instances with the same but true labels.",
"Our experiment result is similar to Fig. 1, even when the feature extractor is trained to fit noisy labels, they are still easily separable due to underlying semantic differences.",
"These two limitations are inter-related, causing noise-learning-based FET methods to still suffer from distantly supervised noise.",
"To alleviate the label noise and avert these limitations, we propose a novel framework FCLC for noisy label learning inspired by weighted training and loss correction (Hendrycks et al., 2018) in machine learning.",
"Our method utilizes feature representations from the model and learns global (local) information, i.e. a cluster-level label confusion matrix.",
"Firstly, we use a backbone learner on noisy data.",
"It serves as a feature extractor and a noise estimator.",
"Secondly, all training data, including noisy data and a small portion of clean data are clustered.",
"The clean data serve as anchors in the feature space to estimate label corruption and sample quality of each cluster.",
"Finally, label corruption and sample quality are used for label correction.",
"Our main contributions are three-fold: ( i ) This study provides fresh insight into instance dependent label noise in FET.",
"We pointed out a novel training method to further exploit feature space and global information.",
"( ii )",
"We designed a framework with feature clustering, estimating cluster-level confusion matrix, and loss correction.",
"( iii )",
"We experimented the proposed method on three datasets.",
"Results show that we made significant improvements over previous state-of-the-art, thus proving the effectiveness of our model.",
"Ablation studies further prove the robustness and wide applicability of our framework.",
"Given a finite set of types, T = { t 1 , t 2 , ..., t | T | } , where | T | denotes the number of candidate types.",
"The task is to assign appropriate types to each mention under context.",
"Formally, an instance is a triplet, ( m, c, y ) .",
"c = { w 1 , w 2 , ..., w n } is the context of m , usually the original sentence.",
"m = { w p 1 , ..., w p l } is the mention.",
"obviously, m is a continuous subsequence of c .",
"Y T denotes appropriate types for ( m, c ) .",
"For convenience, denote Y 's vector form y { 0 , 1 } | T | , y j = 1 means t j Y .",
"When the instance is produced with crowd-sourcing or distant supervision, annotated labels might contain so-called noise.",
"We denote labels with noise y .",
"The instance is thus ( m, c, y ) .",
"Denote the corpus with noisy instances D , the corpus with trusted instances D t .",
"* The two corpus form the whole training corpus D .",
"As shown in Fig. 2, the FCLC framework consists of the following steps :",
"Step",
"1. (Phase 1) Train the backbone model with noisy data D for e 1 epochs and get M 1 .",
"It serves as a feature extractor and a noise estimator.",
"(Sec. 2.3)",
"Step",
"2. Cluster all training samples D with the feature extracted by E 1 , and estimate confusion matrix for each cluster with predictions of M 1 .",
"(Sec. 2.4)",
"Step",
"3. (Phase 2) The calculated clustering-aware confusion matrix and FCLC loss are used to continue training the backbone model.",
"(Sec. 2.5) 2.3 Backbone For fair comparison, the backbone of our model has the same structure as NFETC (Xu and Barbosa, 2018).",
"For an instance ( m, c, y ) , for each word w i in c , word embedding is e wi R d w looked up in word embedding matrix W R d w | V | .",
"A position embedding e pi R d p is used to model the context word position i and mention position ( p 1 , p l ) by looking up relative position in position embedding matrix P R d p 2 N .",
"The final embedding is the concatenation e i = [ e wi , e pi ] .",
"Context Representation A Bi-LSTM (Hochre-iter and Schmidhuber, 1997) is used to model the context representation.",
"Feeding the embedding of c i.e. { e 1 , e 2 , ..., e n } into BiLSTM gets the two directional hidden states h i and h i for each word w i .",
"Word level attention weighted sum following (Zhou et al., 2016) is applied on h i = [ h i h i ] , resulting in the final context representation r c R d c , * Normally |D t | | D| , as in all the datasets we reported in this paper.",
"where means element-wise sum and d c is the hidden size of the BiLSTM and the dimension of the context embedding.",
"Mention Representation The average encoder of a mention takes word embeddings of the mention { e p 1 , e p 2 , ..., e p l } and takes the average: r w = 1 l (cid:80) lk =1 e p k .",
"The LSTM encoder of a mention takes an extended mention with one more token before and after the original mention and produces hidden state features { h p 1 1 , ..., h p l +1 } .",
"Take the last output h p l +1 as r l .",
"The final representation of the mention is r m = [ r w , r l ] Classification Softmax classifier and cross-entropy are used based on the feature r m,c = [ r c , r m ] of x : s ( x ) = W r m,c + b (1) p ( y | x ) = softmax( s ( x )) (2) ( x, y ; ) = log ( p ( y | x )) (3) With a given dataset D , the model is trained with all samples ( x, y ) in D .",
"We make the assumption that the noise ( y , y ) forms locality in the feature space, especially when the feature is calculated from the original mention and",
"context ( m, c ) , ( m, c ) determines y , and the feature is trained with y .",
"We adopt clustering to utilize local statistics as smaller-grained feature information.",
"To be specific, we perform k-means with r m,c on the whole training set D , and separate D into K clusters.",
"Denote the k -th cluster C k , C t k = C k D t , C k = C k D .",
"We mainly utilize the two following statistics: k = |C t k | |D k | (5) k estimates the quality of the cluster k .",
"It acts as a soft cluster sieving.",
"where A ik = { ( x, y ) | ( x, y ) C t k and y i = 1 } , (cid:98) C ijk estimates the probability in cluster k to annotate noise j for true label i .",
"The idea of forward loss correction is proposed by Patrini et al. (2017).",
"The basic idea is to modify the loss with the noise transition matrix T .",
"Such that the minimizer under the new loss with noisy labels is the same as the minimizer of the original loss under clean labels.",
"The modification relies on the assumption that the label noise is independent from instances, i.e. y x | y .",
"Hendrycks et al. (2018) proposed to estimate T with a small set of clean labels, under the assumption that y y | x .",
"While these assumptions do not hold globally for distantly 1999 supervised FET, they hold better in clusters.",
"We introduce the cluster-wise loss correction in the following sections.",
"Transition Matrix Estimation Assuming the backbone model is well trained, i.e. p ( y j = 1 | x ) is close enough to p ( y j = 1 | x ) .",
"We use the predicted probability on trusted instances in clusterk to estimate the transition probability.",
"C ijk = p ( y j = 1 | y i = 1 , x C k ) p ( y j = 1 | y i = 1 , x C t k ) 1 | A ik | (cid:88) ( x,y ) A ik p ( y j = 1 | x ) = (cid:98) C ijk (7) Forward Loss Correction Cross-entropy is composite (Reid and Williamson, 2010),denote it as , its inverse link function 1 is softmax .",
"Notice C ijk can bridge the loss with noisy label y , ( x C k , y i = 1) , to predictions for the true label: log ( p ( y | x )) log c (cid:88) j =1 C jik p ( y j = 1 | x ) (8) Let T k = C k , define the forward loss as: ( s ( x )) = ( T k s ( x )) (9) The property holds on each cluster similar as in (Patrini et al., 2017), with all x C k , training with noisy label y on is the same as with true label y on the original loss : argmin s E x , y ( s ( x )) = argmin s E x , y ( s ( x )) (10) Different from global forward loss correction, the parameters that minimize the loss in each cluster are not the same.",
"We balance the clusters with k .",
"The trusted samples ( x, y ) D t are also used.",
"The loss of the full model is: LFCLC = (cid:80) ( x,y ) D t ( s ( x )) + (cid:80) Kk =1 k (cid:80) ( x, y ) C k ( s ( x ))) +(1 ) (cid:80) Kk =1 k (cid:80) ( x, y ) C k ( s ( x ))) (11) Where is the hyperparameter to balance FCLC loss and the original loss.",
"Our introduced framework has several advantages: 1) Lightweight .",
"This method does not include extra trainable parameters to the backbone model.",
"2) Stable .",
"The framework involves two hyperparameters, and phase-1 train epochs e 1 and we empirically find them stable.",
"3) Flexibility .",
"Our improvement is orthogonal to the backbone model.",
"It only requires that the backbone model is sufficiently expressive and uses an appropriate composite loss (Reid and Williamson, 2010).",
"Thus, it is pluggable to a large number of FET models.",
"We evaluate the proposed model on three different FET datasets and compare it to several state-of-the-art models.",
"In addition, to support our claims we also conduct several subsidiary experiments to analyze the impacts of our proposed module in detail.",
"The datasets are described below, we use exactly the same train/dev/test split with previous works (Ren et al., 2016a; Chen et al., 2019).",
"Detailed statistics of the three datasets are also shown in Table",
"1. BBN It contains sentences extracted from the Wall Street Journal and distantly labeled by DBpedia Spotlight (Weischedel and Brunstein, 2005).",
"OntoNotes It was constructed using sentences in the OntoNotes corpus and distantly supervised by DBpedia Spotlight (Weischedel et al., 2013).",
"Wiki/FIGER It was derived from Wikipedia articles and news reports, entities of the training samples are distantly annotated using Freebase (Ling and Weld, 2012).",
"We follow prior work and use the strict accuracy (Acc), Macro F1 (Ma-F1), and Micro F1 (Mi-F1) scores.",
"During the experiment, all these metrics are calculated by running the model five times and computing the mean and standard deviation values.",
"We consider the following competitive FET systems as our baselines: (1) AFET (Ren et al., 2016a); (2) Attentive (Shimaoka et al., 2016); (3) NFETC/NFETC hier (Xu and Barbosa, 2018); (4) CLSC/CLSC hier (Chen et al., 2019); (5) NFETC-AR/NFETC-AR hier (Zhang et al., 2020); (6) NFETC-VAT/CLSC-VAT (Shi et al., 2020); (7) Multi Level Learning to Rank (ML-L2R) (Chen et al., 2020); (8) Box (Onoe et al., 2021).",
"These baselines are compared with several variants of our proposed model: (1) FCLC : proposed model without the hierarchical loss; (2) FCLC hier proposed model with the hierarchical loss; (3) FCLC (without k ) our proposed model trained without cluster quality estimation, i.e. = 1 for all clusters; (4) FCLC (without loss correction) our proposed model without loss correction, only cluster quality estimation working; (5) FCLC (without cluster) our proposed model without clustering, i.e. calculated a globally-uniform confusion matrix; (6) FCLC (with reinit): our proposed model with fresh parameters before the start of step 3 as suggested by Patrini et al. (2017).",
"(3)-(6) are implemented based on and should be compared with the best configuration between FCLC and FCLC hier on each dataset, that is, compared with FCLC on BBN and compared with FCLC hier on Wiki and OntoNotes.",
"To make an equal comparison, following (Xu and Barbosa, 2018; Chen et al., 2019; Zhang et al., 2020), we use exactly the same pre-trained 300-dimensional GloVe word embeddings (Pennington et al., 2014) and fix the embedding vectors during training.",
"The model parameters are optimized using the Adam (Kingma and Ba, 2014) optimizer.",
"All of our models are implemented in Tensorflow.",
"The implementation of our model can be cound at https://github.com/Los-Phoenix/NFETC-FCLC.",
"As NFETC and NFETC hier are our backbone models, we follow the hyper-parameters of the backbone except for our introduced hyper-parameters and e 1 .",
"The detailed hyper-parameter settings on the three datasets are shown in Table 2, we also report hyper-parameter impact curves in Fig.",
"3. 3.5 Results and Analysis Main Result Table 3 shows the results of our proposed approach ( FCLC ) and several competitive FET systems.",
"We highlight the statistically significant best scores of each metric in bold.",
"According to the experimental results, we make two main observations: (1) The performances of our proposed model surpass the backbone NFETC model by a remarkable large margin (improving Micro F1 by 2.1%, 3.8%, and 7.8% separately), demonstrating the benefits of the proposed two-phase FCLC module.",
"The relative performance improvements are consistent with or without the hierarchy loss (compared FCLC and FCLC hier to the corresponding baselines).",
"(2) Compared to other noisy learning methods such as CLSC, NFETC-AR, and VAT, our model still achieves considerable improvements under most metrics when using the same backbone and very similar hyper-parameter settings.",
"For example, compared to NFETC-AR, our model improves Micro-F1 by 1.25% to 6.38% on three datasets.",
"It indicates that, by utilizing both the feature space representations and the global and local statistical information, the model can reduce the impact of noisy labels more effectively.",
"Ablation Study To study the detail of our models, we explore the performances of three main model variants, shown in the last several rows of Table",
"3. We find that the cluster quality k , the loss correction module and the feature cluster process are all critical to model performances in some situations.",
"Specifically, as shown in FCLC (without cluster), feature clustering has minor impacts on Wiki and Ontonotes.",
"This is probably because the noisy distribution on these two datasets is relatively simple and the global confusion matrix is sufficient.",
"Moreover, we observe that the re-initialization before Step 3 has a great impact on all metrics.",
"Staring Step 3 with a fresh re-initialized FET model degrades the accuracy by 3.2% on Ontonotes.",
"It denotes that the learner trained in the first phase is beneficial for the noisy robust learning process, by providing optimal parameters initialization.",
"Sensitivity of the introduced hyper-parameters Using the same setting for model training, Fig. 3 analyses the sensitivity of FCLC to the introduced hyper-parameters: the FCLC objective weight , the Step-1 training epochs e 1 .",
"Fig. 3(a,",
"b) shows the performance trend on the Ontonotes and BBN datasets when changing .",
"While selecting a proper ratio between loss-correction loss and the original loss is important, the performance near optimum is stable and steadily outperforms the baseline.",
"Fig. 3(c,",
"d) analyses the sensitivity with respect to e 1 .",
"the Micro-F1 improves as e 1 increases but stops improving and become unstable when e 1 is large enough, since the model starts to overfit noise.",
"It is also reasonable that the optimal range of and e 1 in BBN and Ontonotes are different as they have different training set sizes and different distance supervision noise distribution.",
"Will cluster number affect performance?",
"We investigate how much the FCLC model benefits from different values of feature cluster number k .",
"Fig. 5 demonstrates that under a reasonable feature cluster range (near | T | ), the model can achieve competitive and similar performances.",
"How many trusted instances does the model need?",
"We examine the robustness of the model to the amount of clean data by comparing the performances with 5% to 100% trusted instances.",
"Refer to Fig. 4, we observe that due to the differences of the training set, our model achieves comparable accuracy with 30%, 40%, and 70% D t samples on 2002 -10 -5 0 5 10 15 20 25 30",
"Wiki, Ontonotes, and BBN separately.",
"With only a very small size of trusted instances, e.g. 20% BBN trusted set, or 128 samples, the model begins to improve significantly.",
"What if we did not have any trusted instances?",
"Although a small number of clean samples is always practical to obtain or relabel with an expert, we push the limit to no trusted instances at all.",
"What performance can our model achieve in such a situation?",
"We performed the \"no clean training set\" experiment to test the robustness of our model.",
"In Table 4, FCLC (w/o D t ) indicates for the variant that the trusted instances are not used for phase 2 training but only in feature clustering and confusion matrix calculation.",
"In that situation, our approach still has similar performances with previous SOTA models on most metrics .",
"FCLC (w/ pl) variant means that, during the clustering process, instead of using the trusted instance set D t split from the training set, we introduce a simple and classic pseudo labeling method (Lee et al., 2013) to generate the labels needed by clustering and training.",
"We find that compared to the baseline method, FCLC with pseudo labeling still achieves much better performances.",
"It is worth pointing out that it means our model is trained with fewer instances than previous SOTA, since D t is a part from the training set they use.",
"Visualization of the representations We analyze the role of FCLC module by visualizing the feature vectors.",
"Fig. 6 illustrates samples in a cluster (circled in all 4 sub-figures).",
"From Fig.",
"6(a), we observe that the backbone model fails to distinguish some samples of class A (/ORGANIZATION/GOVERN-MENT, red) and class B (/GPE/COUNTRY, blue), due to noisy labels.",
"Fig.",
"6(b) shows that our model learns to correct these instances.",
"With FCLC the classifier is corrected to predict the right label.",
"Meanwhile, in feature space, the boundary between these samples and the confusing class is also clearer, which means FCLC also helps to refine feature extraction with loss correction.",
"Fig.",
"6(e) shows the row of '/GPE/COUNTRY'.",
"Managing to notice the confusion from '/GPE/COUNTRY' to '/ORGANIZATION/GOVERNMENT' enables our model to perform the appropriate correction.",
"Due to this, FCLC are resistant to the noisy labels.",
"further verify our claim that our model can alleviate the confirmation bias in the noisy FET task, we analyze the prediction confidence on test set samples, as shown in Fig. 7.",
"The average confidence of correct and wrong test samples is calculated after each training epoch.",
"The results show that, on the Wiki dataset, after phase one the wrong sample average confidence is 0.700 but the backbone model reached 0.833 at the end of the training (with early stopping).",
"Also, after phase two FCLC improves the correct sample confidence from back-bone's 0.939 to 0.950 on Wiki.",
"The usage of datasets collected with distant supervision often results in so-called noisy labels.",
"Several studies have investigated deep learning approaches with noise.",
"Existing noisy learning methods include designing robust loss functions (Wang et al., 2019), designing robust architectures by adding noise adaptation layers (Chen and Gupta, 2015; Goldberger and Ben-Reuven, 2017), selecting samples (Onoe and Durrett, 2019b), and adding noise-robust regularization (Shi et al., 2020).",
"Among them, Patrini et al. (2017) and Hendrycks et al. (2018) proposed forward loss correction.",
"It avoided explicit relabeling and matrix inversion.",
"These noisy learning methods are mostly restricted to the 2003",
"noise that is conditionally independent of the data features (Frnay and Verleysen, 2014).",
"However, in real-world applications such as FET, noise distributions are more complex and instance-dependent, requiring more powerful noisy learning methods.",
"FET is studied based on the distant supervision training data (Mintz et al., 2009; Ling and Weld, 2012).",
"Various features (Yogatama et al., 2015; Xu and Barbosa, 2018), network structures (Dong et al., 2015; Shimaoka et al., 2016), and feature space (Ali et al., 2021; Onoe et al., 2021)are explored to refine the mention and type representation.",
"Label inter-dependency (Lin and Ji, 2019) and type hierarchy (Chen et al., 2020) are often used, added by relations among instances and labels (Ali et al., 2020; Li et al., 2021; Liu et al., 2021).",
"Label noise is the main problem brought by distance supervision.",
"Besides common noisy learning methods discussed in Sec. 4.1 (Onoe and Durrett, 2019b; Shi et al., 2020; Wu et al., 2019), FET-specific noise combat methods are proposed.",
"Ren et al. (2016a,b) utilized partial-label embedding.",
"Xu and Barbosa (2018) modified hierarchical loss to cope with overly-specific noise.",
"Zhang et al. (2020) automatically generated pseudo-truth label distribution for each sample.",
"Additional resource also help to improve the performance.",
"The resource include external knowledge base (Xin et al., 2018; Dai et al., 2019), and with BERT-like pipeline (Pa-tel and Ferraro, 2020; Ding et al., 2021).",
"Choi et al. (2018) proposed a way to utilize more distance supervision and crowd source, followed by Onoe and Durrett (2019b).",
"Apart from the above, (Chen et al., 2019) and (Ali et al., 2020) are the closest to our proposed method.",
"They both select some instances by feature distance to modify labels or refine mention representation for noisy instances.",
"However, their refinement is still explicit and isolated to each instance.",
"Thus the quality relies on the instances they retrieve for label propagation/men-tion reference.",
"Different from these studies, we do not rely on any of these external resources and aim to impose label noise with only the original data without explicit sieving or label changing.",
"In this work, in order to tackle the instance-dependent label noise in fine-grained entity typing tasks, we present a neural FET noisy learning",
"framework that utilizes the feature space information and global information jointly.",
"Experimental results on three publicly available datasets demonstrate that our proposed model achieves the best performance compared with competitive existing FET systems.",
"Furthermore, based on extensive auxiliary experiments, we study the impact of our proposed noisy learning framework in-depth with qualitative and quantitative analysis.",
"In the future, the proposed approach can motivate the need for further understanding of the relationships between dataset noise distribution estimation and the instance features.",
"More work can be done towards this direction.",
"In addition, performances of the proposed framework under different backbone models can be dug to validate the flexibility of the framework.",
"This work was supported by the National Key Research and Development Project of China (No. 2021ZD0110700)."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Large-scale pretrained language models have achieved SOTA results on NLP tasks.",
"However, they have been shown vulnerable to adversarial attacks especially for logographic languages like Chinese.",
"In this work, we propose ROCBERT : a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation, synonyms, typos, etc.",
"It is pretrained with the contrastive learning objective which maximizes the label consistency under different synthesized adversarial examples.",
"The model takes as input multimodal information including the semantic, phonetic and visual features.",
"We show all these features are important to the model robustness since the attack can be performed in all the three forms.",
"Across 5 Chinese NLU tasks, ROCBERT outperforms strong baselines under three black-box adversarial algorithms without sacrificing the performance on clean testset.",
"It also performs the best in the toxic content detection task under human-made attacks.",
"Large-scale pretrained models, by finetuning on sufficient annotated data, have been able to approach or even surpass human performance on many benchmark testsets (Peters et al., 2018; Radford et al.; Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020).",
"However, even pretrained with huge amounts of text, the models are still vulnerable under adversarial attacks like synonyms, word deletion/swapping, misspelling, etc (Li et al., 2019a; Jin et al., 2020; Sun et al., 2020a; Eger and Benz, 2020).",
"These adversarial examples occur frequently in the real-world scenario and can be made either naturally (e.g., typos) or maliciously (e.g., to avoid auto detection of toxic content) 1 .",
"The lack of robustness with them can easily lead to large performance drop when testing in the noisy real-world traffic.",
"The issue is particularly outstanding for logographic languages like Chinese since the attack can be either with the glyph character, pinyin (the romanized phonetic representations) or a combination of them (Wang et al., 2020; Li et al., 2020d; Zhang et al., 2020; Nuo et al., 2020).",
"We show some examples in Table 1.",
"The word (Kobe) can be replaced with synonyms, phonetically or visually similar words.",
"The attacker can also replace the character with its pinyin then continue the attack in the alphabet-level (keb1 in the table).",
"The isolation of semantics and phonetics, and the rich set of glyph characters in written Chinese makes the attacking forms much more diverse than in alphabetic languages like English.",
"Current research works usually adopt two ways to defend adversarial attacks: (1) Run spell checking to correct the written errors before feeding to the prediction model (Pruthi et al., 2019; Li et al., 2020b; Mozes et al., 2021), and (2) Adversarial training, which adds adversarial example to the training data (Zang et al., 2020; Li et al., 2020a; Liu et al., 2020).",
"For the former, Chinese spell checking itself is even a more difficult task because it requires the model to accurately recover the original text.",
"Any tiny errors of the spell checking can 921 lead to unpredicted model behaviors.",
"For the latter, it is hard for the model to adapt to all adversarial variants only in the finetuning stage, especially when the training data is sparse (Meng et al., 2021).",
"To address the above challenges, we propose ROCBERT , a Robust Chinese BERT pretrained with the contrastive learning objective by maximizing the label consistency under various adversarial examples.",
"The adversarial examples are synthesized from an algorithm that encapsulates common types of attacks.",
"We also consider combinatorial attacks where multiple types of attacks can be added on top of each other, which has never been considered in previous research.",
"To defend attacks in all levels, we incorporate multimodal information into the encoder.",
"The phonetic and visual features are inserted into one self-attention layer then dynamically fused in later layers.",
"Across 5 standard NLU tasks and one toxic content detection task, we show the pretrained model achieves new SOTAs under various adversarial attackers.",
"In short, our contribution are (1) We propose pretraining a robust Chinese Bert with adversarial contrastive learning, such that the model can perform well on not only clean testbeds, but also adversarial examples.",
"(2) The model is pretrained with synthesized adversarial examples covering combinations of semantic, phonetic and visual attacks.",
"It takes as input multimodal features to handle all levels of possible attacks.",
"(3) The pretrained model outperforms strong baselines across 5 NLU tasks and 1 toxic content detection task under various adversarial attackers.",
"(4) We perform an extensive ablation studies for pretraining options and have a wide comparison with popular defending methods, which we hope will benefit future research.",
"Adversarial attack There have been a lot of works showing the vulnerability of NLP models under adversarial examples (Li et al., 2020c; Garg and Ramakrishnan, 2020; Zang et al., 2020), which are understandable by humans yet lead to significant model prediction drops.",
"There are usually two types of attacks: (1) semantic equivalent replacement, which can be synthesized by replacing words based on vector similarity (Jin et al., 2020; Wang et al., 2020), WordNet synonyms (Zang et al., 2020), masked prediction from pretrained models (Li et al., 2020c; Garg and Ramakrishnan, 2020; Li et al., 2020d), etc. (2) noise injection, which can be synthesized by adding/deleting/swapping words (Li et al., 2019a; Gil et al., 2019; Sun et al., 2020a), replacing words with phonetically or visually similar ones (Eger et al., 2019; Eger and Benz, 2020).",
"For logographic languages like Chinese, the noise can be much more complex as it can be injected on both the glyph characters or romanized pinyins (Zhang et al., 2020; Nuo et al., 2020).",
"Adversarial defense The most common way of adversarial defense is adversarial training, which simply appends synthesized adversarial examples into the training data (Zang et al., 2020; Li et al., 2020a).",
"Nonetheless, it relies only on the limited labeled training data.",
"In contrast, the proposed ROCBERT is pretrained on billions of text and can better adapted to diverse adversarial variants.",
"Another popular way is to first remove the noise with off-the-shelf spell checkers, then feed the corrected text into the model (Li et al., 2020b).",
"However, Chinese spell checking requires fully recovering the correct text and current model performances are far from satisfactory (Liu et al., 2021; Xu et al., 2021; Wang et al., 2021a).",
"Any tiny error in the spell checking process can lead to unpredicted model behaviors.",
"It also incurs significant latency to model prediction.",
"ROCBERT does not add additional latency and can perform well even if fully recovery is difficult due to its consistency-maximization pretraining objective.",
"There have also been works on pretraining more robust models through virtual adversarial training and noise regularization (Yoo and Qi, 2021; Wang et al., 2021b; Meng et al., 2021), but they perform poorly on man-made attacks.",
"As we focus on Chinese in this paper and Chinese characters are much more diverse than in alphabetical languages, we design the following 5 Chinese-specific attacking algorithms first.",
"phonetic: Replace a Chinese character with a random homonym (ignoring diacritics).",
"For poly-phones, we consider the 2 most common pinyins 2 .",
"Visual: Replace Chinese characters with their visually similar characters (with the similarity table in the Kanji Database Project) 3 .",
"Character Split: Split one character into two parts with every part still being (or visually similar to) a valid Chinese character.",
"We follow the Chinese 2 https://unicode.org/charts/unihan.html 3 http://kanji-database.sourceforge.net/ 922 Figure 1: Adversarial example synthesis process.",
"splitting ways for Chinese characters in total.",
"Synonym: Segment Chinese characters into words with the jieba tokenizer 5 , then randomly replace the word with one of its synonyms.",
"Two words are treated as synonym if they share a similarity score of over 0.75 6 .",
"We only replace adjectives or nouns as we find other words can be hardly replaced without changing the semantics.",
"Character to Pinyin: Replaces the character into its pinyin representation (without diacritics).",
"Apart from Chinese characters, there are often other characters like the pinyin, numbers, punctuations and foreign words in the Chinese corpus.",
"The following 4 types of attacks apply to not only Chinese characters, but also all other characters.",
"Unicode: Randomly sample one of the visually similar unicodes as a replacement 7 .",
"Random Insertion: Sample one character from the vocabulary set, then randomly insert the character to the left or right of the current character.",
"Swap: Swap the character with its neighbor.",
"Deletion: Delete the character directly.",
"Examples of all types of attacks are in Table 1.",
"The synthesis process of adversarial examples is as follow: Given one sentence, we first select several",
"4 https://github.com/kfcd/chaizi 5 https://github.com/fxsjy/jieba 6 https://github.com/chatopera/Synonyms 7 http://www.unicode.org/Public/security/revision-03/confusablesSummary.txt",
"characters to attack.",
"For each selected character, we then combine the above mentioned character-level attacking algorithms 8 to get its attacked form.",
"Attack Ratio: The attack ratio decides how many characters we will attack.",
"Let n c be the number of characters in the sentence, we define as: = min(max( int ( (cid:15) ) , 1) , n c ) (cid:15) N (max(1 , 0 . 15 n c ) , 1) (1) where the int function rounds (cid:15) into the closest integer.",
"The intuition is that we want to attack 15% of the characters on average 9 .",
"If the sentence is short, we will make sure to attack at least one character.",
"We insert normal Gaussian noise on top of the average ratio to add some randomness.",
"Character Selection: There have been many research works showing that attacking informative words is more effective than random words (Li et al., 2019a; Sun et al., 2020a).",
"Therefore, we decide the chance of one character c i being selected based on its informativeness in the sentence.",
"Let w ( c i ) denote the word c i belongs to, the informative score for c i is counted as the difference of the language model loss after deleting w ( c i ) (denoted as L ( (cid:79) w ( c i ) ) (Li et al., 2016) 10 .",
"The chance that c i will be selected to be attacked is: p ( c i ) = e L ( (cid:79) w ( c i )) | w ( c i ) | (cid:80) n w j =1 e L ( (cid:79) w j ) (2) where n w is the number of words in the sentence.",
"| w ( c i ) | means the number of characters in w ( c i ) such that characters in the same word have equal chances to be selected.",
"Attack Combination: There can be combinations of attacks for one character.",
"For example, we can transfer one Chinese character into its pinyin then continue to attack it in the alphabet level (to pinyin + unicode in Table 1).",
"We define it as a sequential process where a new attack can be added on top at each step.",
"Specifically, the new character c after all the attack combinations applied to c is: c = AS ( c ) A 2 A 1 ( c ) p ( S ( c ) = k ) = q (1 q ) k 1 (3) 8 For synonym replacement which applies in the word level, we apply it on the word that the selected character belongs to.",
"9 The ratio is chosen by manual annotation.",
"15% is the highest ratio we can attack without hurting human reading.",
"10 We use ChineseGPT (Zhang et al., 2021) as the language model, so word here means the subword token defined in the vocabulary of ChineseGPT.",
"wher means applying a new attacking algorithm A to the output of the last step.",
"At each step i , the attacking algorithm A i is randomly selected from all algorithms that are applicable to the output from step i 1 .",
"S ( c ) is the number of attacking steps applied to c , which follows an exponentially decay function.",
"We set q = 0 .",
"7 empirically.",
"The full process of adversarial example synthesis is illustreated in Figure 1.",
"With the above-mentioned algorithm to sample adversarial examples, we can pretrain the model with the multimodal contrastive learning objective.",
"We follow the standard Bert architecture (Devlin et al., 2019) as our backbone, based on which we integrate phonetic and visual features for input text.",
"Feature Representation: For every character c in our vocabulary, apart from the standard semantic embedding Se ( c ) , we include two more vectors P h ( c ) and V i ( c ) to encode its phonetic and visual features respectively.",
"If c is not a Chinese character, it has its own phonetic vector.",
"Otherwise, P h ( c ) = (cid:80) k pinyin ( c ) P h ( k ) where pinyin ( c ) is its pinyin sequence.",
"V i ( c ) is extracted from its 32 32 image I ( c ) .",
"The image is in simsun ( ) for Chinese characters and arial for others, the default fonts for most online text.",
"V i ( c ) is defined as: V i ( c ) = LayerNorm ( MT ResNet 18( I ( c ))) (4) M is a learnable matrix and we utilize Resnet18 (He et al., 2016) to map I ( c ) into a one-dimentional vector (freezed during training).",
"Visual Representation Pretrain: To get an reasonable initialization, we add another pretraining stage only for the visual representation.",
"Phonetic representations are randomly initialized 11 .",
"M in Eq 4 is pretrained with the same contrastive loss as in Eq",
"5. The positive sample for the character c is its visually adversarial form c = A ( c ) .",
"A U ( visual, character split, unicode ) , which means uniform sampling from the three visual attacking algorithms mentioned in 3.",
"If c is split into two characters c 1 and c 2 , we sum the visual representation of the two split characters V i ( c ) = V i ( c 1 ) + V i ( c 2 ) .",
"The negative samples 11 We show in Section 5.3 that pretraining is necessary for visual features not but for phonetic features.",
"are all other characters in the same batch.",
"After training, visually similar characters will be close in their representation space.",
"Feature Integration: A straightforward way to integrate these multimodal features is to fuse them before fed to the encoder (Sun et al., 2021; Liu et al., 2021).",
"However, three features will be given equal weights and the model cannot dynamically attend to only useful features.",
"Another way is a two-step encoding which first decides the weight, then encode with selective attention (Xu et al., 2021), but it will significantly slow down the system.",
"We propose a lightweight fusion method layer-insert , which insert multimodal features in only one encoder layer.",
"Let H k ( i ) denote the representation of the i th word in the k th layer, we insert by: W 1 = KT 1 H k ( i ) H k ( i ) V 1 W 2 = KT 2 H k ( i ) P h ( i ) V 2 W 3 = KT 3 H k ( i ) V i ( i ) V 3 H k ( i ) = W 1 H k ( i ) + W 2 P h ( i ) + W 3 V i ( i ) W 1 + W 2 + W 3 where P h ( i ) and V i ( i ) are the phonetic and visual representations and K j /V j are learnable matrices.",
"Intuitively we can use the layer 0 to k 1 to decide the weights of three multimodal representations and use the rest layers for sentence representation learning.",
"It allows dynamic fusion according to sentence context yet adds marginal complexity.",
"The model loss has two components: the contrastive learning loss and the standard masked language model (MLM) loss.",
"Contrastive Learning: The idea of contrastive learning (Chen et al., 2020; Kim et al., 2021) is that the representation space should be made closer for similar (positive) samples and farther for dissimilar (negative) samples.",
"For each sentence, we treat its adversarial form (obtained from the algorithm in 3) as positive and all the other sentences in the same batch as negative.",
"Given a batch with N sentences, the loss to the i th sentence s i is: L c ( i ) = log e sim ( s i , s i ) / (cid:80) Nj =1 e sim ( s i ,s j ) / , (5) where is a temperature hyperparameter and s i is the adversarial example synthesized from s i .",
"We set = 0 .",
"01 based on our pilot experiments and 924 define sim ( s i , s i ) as h (cid:62) i h i (cid:107) h i (cid:107)(cid:107) h i (cid:107) , which is the cosine similarity in their representation space h i and h i .",
"Mix with MLM: We mix the contrastive learning loss with the standard masked language model (MLM) loss (Devlin et al., 2019) to enable both sentence and word level representation learning.",
"We use a character-based tokenizer because (1) Chinese characters as themselves stand for individual semantic units (Li et al., 2019b) and (2) char-based models are much more robust under noisy and adversarial scenarios (El Boukkouri et al., 2020).",
"For Chinese characters, we use two masking strategies Whole Word Masking (WWM) and Char Masking (CM) because a large number of words in Chinese consist of multiple characters (Cui et al., 2019; Sun et al., 2021).",
"The contrastive learning loss and the MLM loss are equally weighted.",
"Model Details We use a vocabulary size of 16224, out of which 14642 are Chinese characters.",
"We provide two versions of ROCBERT : base and large.",
"The base version has 12 layers/heads with 768 hidden neurons.",
"It is trained for 600k steps with a batch size of 4k, learning rate of 1e-4 and warmup rate of 25k steps.",
"The large version has 48 layers and 24 attention heads with 1024 hidden neurons.",
"It is trained for 500K steps with a learning rate of 3e-4, warmup of 70K steps and batch size of 8k.",
"Pretraining Details Following the common practice, we pretrain our model on 2TB text extracted from a mixture of THUCTC 12 , Chinese Wikipedia and Common Crawl.",
"Models are trained on 64 NVIDIA V100 (32GB) GPUs with FP16 and ZERO-stage-1 optimization (Rasley et al., 2020).",
"To make better use of the GPU, we train our model with PatricStar 13 which applies a dynamic memory scheduling with a chunk-based memory management module (Fang et al., 2021).",
"The memory management offloads everything but the current computing part of the model to CPUs.",
"This results in training a much larger model within the same hardware environment.",
"The chunk-based memory management takes advantage of the linear structure of the transformer-based model, so that it will inherently prefetch the upcoming layers to GPUs.",
"Baseline Models We compare our model with SOTA pretrained Chinese models: (1) MBert-Chinese (Devlin et al., 2019), (2) Bert-wwm (Cui et al., 2019), (3) MacBert (Cui et al., 2020), (4) Ernie-gram (Sun et al., 2019, 2020b) and (5) ChineseBert (Sun et al., 2021).",
"BERT-wwm continues pretraining from MBert-Chinese with the Whole Word Masking pretraining strategy.",
"MacBERT applies the MLM-As-Correlation (MAC) pretraining strategy as well as the sentence-order prediction (SOP) task.",
"ERNIE-gram adopts various masking strategies including token-level, phrase-level and entity-level masking to pretrain BERT on largescale heterogeneous data.",
"Chinese-Bert is pretrained with the glyph and phonetic features.",
"Tasks We test our model on 5 standard Chinese NLU tasks and one toxic detection tasks.",
"The 5 NLU tasks are: (1) ChnSentiCorp, Chinese sentiment classification with 2k training data 14 , (2) TNEWS: news title classification with 50k training data, (3) AFQMC: question matching with 34k training data, (4) CSL, keyword recognition from paper abstracts with 20k training data ChnSentiCorp: 2k (Xu et al., 2020) and (5) CMNLI, Chinese Multi-Genre NLI with 390k data (Conneau et al., 2018).",
"Toxic detection can server as a task with human-made\" attacks in contrast with the synthesized ones.",
"It is collected from user interactions (written) with a popular online conversational platform, where users sometimes use various manmade attacks to avoid automatic system filtering of junk ads, porn and abusive information.",
"We manually annotate 50k user inputs and identify 2k toxic contents (positive), out of which 90% are in adversarial forms.",
"We randomly sample 2k negative text then split the whole into train/dev/test with 8:1:1.",
"Attacker We test the model performance under three different attackers (all untargeted as we do not need restrictions to the target class): (1) ADV , our own attacking algorithm, (2) TextFooler (Jin et al., 2020), a black-box algorithm replacing important words with semantically similar ones and (3) Argot (Zhang et al., 2020), a black-box attacking algorithm considering Chinese-specific features.",
"We set the maximum attacking ratio for all the three algorithms as 20% .",
"TextFooler is originally designed for English, we reimplement it with corresponding pretrained Chinese-version models.",
"14 We use the small version of training data to test the few-shot capability of models.",
"Chinese NLU Results We show the results on 5 Chinese NLU tasks in tables 2 to",
"6. For every task, we report the model accuracy measured in the clean testset and the adversarial testsets under 3 adversarial algorithms ADV , TextFooler and Argot .",
"We report the performance of all base-version models for a fair comparison.",
"We select the best-performed base-version model to test its large-version performance and compare it with ROCBERT .",
"As can be seen, our attacking algorithm ADV do not affect much on TNEWS, AFQMC and CSL because they rely more on the global sentence structure instead of individual words.",
"On tasks like sentiment classification and NLI, single words contribute mostly to the model decision and therefore the attacking can lead to significant performance drop.",
"Argot and TextFooler lead to more drop compared with ADV because they explicitly select words that affect the model decisions most while ADV selects words to attack based on the general language model scores.",
"Argot is more effective than TextFooler because it tailors its character replacement to consider Chinese-specific features.",
"Overall ROCBERT outperforms other models over all attacking algorithms on all the 5 tasks .",
"Even in the clean dataset, it performs the best on 4 out of the 5 tasks.",
"ChineseBert performs the second under various attacks because it also considers multimodal features during its pretraining same as ROCBERT , which further confirms the importance of using mulimodal features in Chinese language pretraining.",
"Toxic Content Detection Results We train all models in the toxic content detection task.",
"As can be seen in Table 7, ROCBERT outperforms all other models over 4 metrics .",
"This confirms the its effectiveness at capturing the true semantics regardless of its adversarial form.",
"The difference Model Clean ADV TextFooler Argot Base MBert 56.84 53.76 42.05 40.18 Bert-wwm 57.44 54.12 45.25 40.76 MacBert 57.53 54.41 45.10 41.94 Ernie-gram 57.30 52.58 43.02 41.16 ChineseBert 57.65 55.74 51.01 50.27 RoCBert 58.64 57.14 52.05 52.21 Large ChineseBert 59.65 55.92 50.75 51.83 RoCBert 59.98 59.17 54.74 54.46 Table 3: Performance on TNEWS Model Clean ADV TextFooler Argot Base MBert 74.07 72.04 57.69 51.24 Bert-wwm 75.07 72.40 57.58 51.05 MacBert 74.79 72.08 57.37 50.78 Ernie-gram 75.42 71.07 56.81 50.34 ChineseBert 73.77 72.59 57.92 52.41 RoCBert 75.48 74.11 62.95 62.16 Large Ernie-gram 76.35 70.92 58.04 50.64 RoCBert 77.48 76.43 65.85 64.97 Table 4: Performance on AFQMC among models is smaller because they have all been finetuned on this task.",
"All models can get adapted to different forms of attacks in the training process while the tables 2 to 6 are testing the zeroshot generalization to unknown attacks.",
"Defending Method Comparison We further compare ROCBERT with two other popular ways of defending adversarial attack: (1) run a spell-checker before fed to the model and (2) adversarial training ( advtrain ) which augments training data with adversarial examples.",
"We add these two de-926 Model Clean ADV TextFooler Argot Base MBert 81.83 78.28 61.06 52.40 Bert-wwm 81.50 79.08 61.68 53.41 MacBert 81.97 78.34 61.75 52.35 Ernie-gram 82.70 79.53 63.54 53.66 ChineseBert 81.77 78.69 61.27 53.79 RoCBert 83.83 82.56 69.29 63.07 Large Ernie-gram 83.05 79.42 61.85 57.43 RoCBert 85.28 83.59 70.13 66.38 Table 5: Performance on CSL Model Clean ADV TextFooler Argot Base MBert 80.53 69.57 50.21 45.52 Bert-wwm 80.79 68.54 50.46 44.26 MacBert 81.01 69.94 49.86 42.07 Ernie-gram 82.22 68.83 50.77 44.69 ChineseBert 81.42 72.27 52.85 47.15 RoCBert 81.27 74.14 59.95 55.17 Large Ernie-gram 82.36 70.11 52.45 45.82 RoCBert 82.38 76.83 60.26 56.64 Table 6: Performance on CMNLI fending methods on top of the best-performed base model (on clean testsets) in different tasks: ChineseBert for TNEWS, Ernie-gram for AFQMC, CSL and CMNLI, MacBert for ChnSentiCorp.",
"We apply the spell-checker in Cheng et al. (2020).",
"The results are visualized in Figure 2.",
"We can see that spell checking improves the performance only marginally and sometimes even hurt the performance (best-other under ADV in CI).",
"The reason could be that the spell checker performs poorly for out-of-domain adversarial examples.",
"The errors could be propagated and further reduce the performance.",
"advtrain can significantly benefit the performance, but note that it explicitly peeps\" at the adversarial algorithm applied in the testset while ROCBERT is not aware of the testing adversarial algorithm.",
"Nevertheless, it is still comparable and in some cases even outperforms advtrain .",
"By combining ROCBERT and advtrain, the model robustness can be further improved.",
"We perform a set of ablation studies to understand the choice of different components in ROCBERT .",
"All models in this section are pretrained with the same base architecture and hyperparameters for one epoch on 1M sampled training text then tested in TNEWS.",
"The results are shown in Table 8.",
"Loss To study the effects of the loss function used in the pretraining stage.",
"We tried two other settings: (1) contrastive only , where the model is pretrained only with the contrastive learning loss in Eq 5 and (2) MLM-only , where the model is pretrained only with the MLM objective as in standard Bert.",
"We can see that both options lowers down the model performance.",
"By combining both loss, the model can be robust under adversarial attacks without affecting the performance in clean data.",
"Tokenization It has been widely demonstrated that char-based tokenization is preferred for Chinese characters (Li et al., 2019b), but it is rather unclear how we should model pinyins and non-Chinese words.",
"We try different tokenization methods for non-Chinese characters: (1) bpe (Sennrich et al., 2016).",
"We set the vocabulary as 20k and train the split on the training data (after convert-927 5% 10% 15% 20% 25% 0 .",
"ing all Chinese characters into pinyin).",
"(2) char-cnn (Zhang et al., 2015), which process each character individually but get the pinyin embedding with a char-cnn.",
"The best setting in ROCBERT used char-sum which processes each character individually and set the pinyin embedding as the sum of its character embeddings.",
"We can see that bpe hurt the performance.",
"This might be because the bpe split is trained on clean data only.",
"For adversarial examples, the letters in pinyins can be easily perturbed and break its vocabulary.",
"Char-based tokenization is more robust under adversarial attacks.",
"Char-cnn does not lead to improvement here, probably because there are a limited combination of letters in Chinese pinyins ( 400 ), each pinyin can usually be uniquely identified by its bag of characters without the need of order information.",
"Multimodal feature We tried removing the visual feature pretraining as mentioned in 4 and observe the performance drop (-vis-pretrain).",
"It is even worse than removing the visual feature completely (-vis), suggesting the pretraining for visual features is essential, without which the model can be hard to learn meaningful visual features.",
"The phonetic feature is less crucial than visual features but also brings positive improvement.",
"By adding a pretraining stage for the phonetic features too (+pho-pretrain), the improvement is very marginal.",
"As the phonetic features are also based on character embeddings, it might be easier for the model to automatically learn the phonetic features compared with the visual features.",
"Multimodal integration We compare our proposed layer-insert with three other ways of integrating multimodal features: (1) sum (Liu et al., 2021), which sums the multimodal embeddings, (2) concatenation , which concatenate(Sun et al., 2021), which concatenate the multimodal embeddings then fuse with an MLP layer, (3) two-step (Xu et al., 2021), which first determine the weight of different embeddings then fuse to the encoder.",
"We can see that ROCBERT performs best with only marginal computational overhead by updating the encoder representation in one layer.",
"Insert Layer We further analyze the effects of the insertion layer.",
"Our best setting inserts the multimodal features in layer 1 for the base model and layer 3 for the large model.",
"From Table 8, we can see that when inserting them in the upper layer 4,7 and 10, the performance gradually drops, suggesting an earlier insert is helpful for the model to incorporate these features in-depth.",
"However, inserting them in layer 0 is also worse since the model can only learn weight among multimodal features solely from bag of words.",
"Attacking Algorithm We change the settings in our attacking algorithm to see the effects in Figure 3.",
"We can see the attacking ratio can neither be too small nor too large.",
"15% is a sweet spot for pretraining.",
"The Gaussian noise added in Eq 1 also brings positive effects consistently, suggesting we should not use a fixed attacking ratio in the pretraining stage.",
"The character selection is also crucial and removing it significantly reduces the performance.",
"To show whether it is necessary to adopt our attacking algorithm with complex combinations of attacking forms.",
"We further compare with pretraining the model with SimCSE (Gao et al., 2021), an algorithm which uses drop out as the noise instead of our adversarial examples.",
"We can see that SimCSE is rarely helpful under different attacks.",
"This suggests it is important to define rule-based attacking algorithms to better fit the real-world attacks.",
"General drop-out regularizations cannot adapt well to complex real-world attacks .",
"We present ROCBERT : the first pretrained Chinese language model that is robust under various forms of adversarial attacks.",
"It is pretrained with the multimodal contrastive learning objective and achieves the best performance on 5 Chinese NLU tasks un-928 der three different attacking algorithms without negative effects on clean testsets.",
"It also significantly outperforms the others in the toxic content detection task.",
"Extensive ablation studies are provided to benefit future research."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"South Asia is home to a plethora of languages, many of which severely lack access to new language technologies.",
"This linguistic diversity also results in a research environment conducive to the study of comparative, contact, and historical linguisticsfields which necessitate the gathering of extensive data from many languages.",
"We claim that data scatteredness (rather than scarcity) is the primary obstacle in the development of South Asian language technology, and suggest that the study of language history is uniquely aligned with surmounting this obstacle.",
"We review recent developments in and at the intersection of South Asian NLP and historicalcomparative linguistics, describing our and others' current efforts in this area.",
"We also offer new strategies towards breaking the data barrier.",
"South Asia 1 is home to one-quarter of the world's population and boasts immense linguistic diversity (Saxena and Borin, 2008; Bashir, 2016).",
"With members of at least four top-level major linguistic families 2 and several putative linguistic isolates, this region is a fascinating arena for linguistic research.",
"The languages of South Asia, moreover, have a long recorded history, and have undergone complex change through genetic descent, sociolinguistic interactions, and contact influence.",
"Nevertheless, South Asian languages for the most part remain severely underdocumented (van Driem, 2008), and several languages with even official administrative status (e.g. Sindhi) are low-resource (if not data-scarce) for the purposes of all natural language processing tasks (Joshi et al., 1 Roughly the Indian Subcontinent, or the geographic and cultural region enclosed by the Himalayas, the Indian Ocean, and the Hindu Kush.",
"2 Indo-European (Indo-Aryan, Iranic, Nuristani), Dravidian, Austroasiatic (Munda, Khasian), Sino-Tibetan (several branches).",
"2020).",
"This data scatteredness persists despite long native traditions of linguistic description, continued language vitality with active use on the internet, and vast numbers of speakers (Rahman, 2008; Groff, 2017).",
"We argue that the most basic problem in NLP/CL work on South Asian languages is not data scarcity, but data scatteredness .",
"There is much data to be extracted for even the most endangered languages (e.g., Burushaski, a language isolate of the north-west), from annotated corpora and grammatical descriptions compiled by linguists, if only one is willing to wrangle idiosyncratic data formats and digitise existing texts.",
"Thus far, commercial interests and scientific agencies have only intermittently supported the development of language technology for the regiontaking a new approach, we propose a research programme from the perspective of computational historical linguistics, outlining current data gathering initiatives in this discipline and potential benefits to other work across NLP.",
"Narrowing the low-resource category.",
"Low-resource languages have recently gained attention in NLP/CL research, both due to the engineering problems of a data-scarce context and also in recognition of the historical focus on English in the field to the detriment of other languages (Hedderich et al., 2021; Ranathunga et al., 2021).",
"This has been accompanied by debate on what languages the label encompasses (e.g. Hmlinen, 2021).",
"In the South Asian context, even Hindi has been labelled low-resource in some recent work.",
"While it is true that for certain tasks a large institutionally-backed language like Hindi can be low-resource, we propose that low-resource' languages can be better described with two kinds of situations: Data scatteredness : Data is available (per-haps even abundant), but due to issues in digitisation, cataloguing, and labelling and annotation it has not been leveraged to its full potential.",
"Data scarcity : Data is not available or very limited to begin with, and without collecting or creating new data we do not have enough to work with.",
"The state of NLP in South Asia.",
"So far, initiatives for improving language technology in South Asia have largely focused on data-scattered (not data-scarce) languages with official status and some degree of standardisation.",
"These include cross-lingual projects such as IndicNLPSuite (Kak-wani et al., 2020), the EMILLE corpus (McEnery et al., 2000), and iNLTK (Arora, 2020), and workshops like DravidianLangTech (Chakravarthi et al., 2021) and WILDRE (Jha et al., 2020).",
"As table 1 shows, only a select few languages benefit from NLP researcheven fewer benefit from (com-mericialised) products like Google Translate or OCR tools.",
"Truly data-scarce langauges (e.g. Kangri, Tulu) lack instituational status and have been largely unstudied because the challenges are different and harder to surmount.",
"NLP/CL has proven to be an expansive field as of late.",
"Computational historical linguistics is inextricably linked with computational approaches to fundamental linguistic tasks: corpus building, POS tagging and dependency parsing, morphological analysers, and lexical databases.",
"Work on these has progressed fast for the big languages.",
"For example, Hindi, the highest-resourced South Asian language, has massive hand-annotated dependency treebanks (Bhatt et al., 2009), state-of-the-art neural distributional semantic transformer models (Jain et al., 2020; Khanuja et al., 2021), and machine translation models to and from English (Saini and Sahula, 2018).",
"This is not to say that there are no resources at all for the languages Joshi et al. (2020) terms the Left-Behinds.",
"Linguists, for example, have compiled rudimentary treebanks for many languages, simply waiting to be digitised and converted to a multilingual format like Universal Dependencies; these include Palula (Liljegren and Haider, 2015) and Toda (Emeneau, 1984), which are yet to be the subject of any NLP research work.",
"There are also new treebanks in Universal Dependencies for Kangri, Mandeali, Bhojpuri (Ojha and Zeman, 2020), and Magahi.",
"Historical/comparative linguistics.",
"Historical linguistics is concerned with describing change of all kinds (phonological, morphological, syntactic, etc.) in language over time and the factors (so-cial, cognitive, evolutionary) that contribute to that change.",
"Comparative linguistics aims to use this historical study to relate languages and reconstruct earlier stages and common ancestors of related languages (Campbell, 2013).",
"The study of historical and comparative linguistics has a long history in South Asia, beginning well before similar threads of inquiry in the Western linguistic tradition, with grammarians like Pan.ini (c. 5th century BCE) and Hemacandra (10881173) analysing historical and dialectal language from a comparative perspective.",
"Following the recognition by Western philologists of an Indo-European language family that includes Sanskrit, comparative study of the languages of South Asia began in earnest.",
"As a result, 1397 several comprehensive comparative grammars featuring the Dravidian (Caldwell, 1856; Andronov, 2003; Krishnamurti, 2003) and Indo-Aryan families (Beames, 1872; Hoernl, 1880; Bloch, 1934; Masica, 1993) have appeared in the years since.",
"Emeneau (1956) was the first to posit a South Asian zone of language contact and convergence spanning multiple families.",
"Subsequent work on micro-areal zones has yielded many insights into the nature of linguistic interactions in the region (Peterson, 2017; Liljegren et al., 2021; Toulmin, 2006).",
"The sole South Asia-wide linguistic data collection effort ever be undertaken was the Linguistic Survey of India , completed about a century ago (Grierson, 19031928).",
"To date, there has been no comparable centralised data resource on South Asian languages of its magnitudecovering typological features, the lexicon, and sociolinguistic phenomena.",
"Data in the earliest comparative works was frequently sourced from high-prestige standard varieties like Delhi Hindi, with progress on studying and collecting data from more localised lects largely proceeding in isolation.",
"Compilation of comparative data continued sporadically throughout the 20th century, resulting in works such as the Comparative Dictionary of the Indo-Aryan Languages (Turner, 19621966) and the Dravidian Etymology Dictionary (Burrow and Emeneau, 1984) which attempt at a more diverse spectrum of language data.",
"Meanwhile, progress on documentation and comparative analysis of the Austroasiatic (Anderson, 2008), Sino-Tibetan, and isolate languages (e.g. Burushaski, Nihali, Kusunda) of South Asia is still in its infancy.",
"As a consequence, studies drawing upon their data for purposes such as substrate analysis often lack nuance and family-internal consistency.",
"Having established the issue of data scatteredness, the mutual benefit inherent to data collection (for historical/comparative linguistic work and other NLP tasks), as well as possible interesting avenues for future research, we present a compilation of our ongoing projects in this direction, most involving languages that have not been studied in NLP before.",
"Structured, syntactically-parsed corpora are not only essential for (1) downstream NLP tasks such as information extraction (Gamallo et al., 2012) and semantic role labelling (Li et al., 2019), but also have the potential to (2) aid quantitative comparative and historical linguistic study .",
"Parsing according to several formalisms is possible, though dependency formalisms in particular are better equipped to handle the flexible word-order characteristic of many South Asian languages (as-suming the parsing algorithm used adequately handles non-projective dependency trees 3 ) (Palmer et al., 2009).",
"Multilingual dependency formalisms such as Universal Dependencies (UD) (Nivre et al., 2016) have established consistent guidelines for the annotation of binary dependency relations, morphology, and other linguistic features, resulting in the recent appearance of treebanks for several data-scarce languages of the region (Bhojpuri, Kangri, etc.) as well as their older diachronic stages (Vedic and 3 For vertex set V , weighted edge set E { i w j i , j R , w R } , and root V , let G = ( , V , E ) be a rooted weighted directed graph.",
"A dependency tree is a spanning subgraph D = ( , V , E ) , E E subject to the following well-formedness constraints (Zmigrod et al., 2020): (C1) Each non-root vertex of D has one incoming edge (C2) D is acyclic (C3) Root of D has exactly one outgoing edge In other words, dependency trees are arborescences (di-rected, rooted trees) equipped with the root constraint (C3).",
"Graph-based parsing algorithms find the optimal dependency tree D , that is, the dependency tree D with maximum total edge weight in the set of all possible dependency trees D ( G ) , for a given sentence (maximum weight spanning ar-borescence).",
"A treebank is a corpus of such dependency trees.",
"Towards the second goal listed above, Farris and Arora (2022) compiled a UD treebank for the Ashokan Prakrit dialect continuuma parallel corpus of 14 pillar/rock inscriptions in six Middle Indo-Aryan (MIA) dialects dating back to the 3rd c.",
"BCE.",
"As the first study of MIA from a computational perspective, this work calls for an analysis of Indo-Aryan regional fragmentation through dialec-tometry, approaching contentious linguistic issues with statistical arguments curated using treebank data.",
"In a similar vein, we are currently working towards filling other chronological gaps in corpora (e.g. the Old Sinhala Sgiri Graffiti of the Early New Indo-Aryan stage) through treebanking in parallel with their modern stages (e.g. Sin-hala).",
"To the best of our knowledge, we are unaware of any studies involving such diachronic transfer frameworks, where knowledge transfer between two historically-separated stages of the same language can be used to dependency-parse a given stage using resources from the other.",
"Other historically-attested langauges we plan to include in this pipeline include Old Kashmiri, Old Maldivian, and Old Tamil.",
"In terms of modern South Asian languages, there has been recent diversification from combined efforts, such as an upcoming dependency parsing shared task at the WILDRE 2022 workshop based on new treebanks (Nallani et al., 2020; Ojha and Zeman, 2020).",
"Multilingual dependency parsing.",
"More broadly, we are interested in cross-lingual transfer models (Duong et al., 2015; Guo et al., 2015; Schuster et al., 2019) as a means of expediting dependency parsing for data-scarce South-Asian languages.",
"A similar approach for Uralic languages is (Lim et al., 2018).",
"They propose a dependency-parsing model for North Saami and Komi using annotated corpora and bilingual word-embeddings from high-resourced genetically related (Finnish) and typologically similar Figure 4: A map of languages included in Jambu, colour-coded by subfamily designation with point-geometry variation by diachronic stage.",
"(Russian) languages, without the requirement of extensive parallel texts for training.",
"They conclude that while genetically related pairs (KomiFinnish, North SaamiFinnish) allow for highly efficient parsing, pairs of unrelated languages in contact (KomiRussian) also provide valuable input for further correction.",
"Given the languages of South Asia exhibit common typological features by virtue of sharing a linguistic area, treebanking efforts will undoubtedly beneft from a multilingual dependency parsing approach.",
"Languages like Sindhi, Punjabi, and Sinhala, which have genetic relatives and contact languages that are comparatively more resourced, are our immediate targets for such efforts.",
"One of our major efforts in data-collection for the region has been the Jambu project.",
"Jambu is a compiled cognate lexicon of all South Asian languages, cutting across phylogenetic groupings and historical language stages.",
"It has a web interface online at https://neojambu.glitch.me/ .",
"It includes data parsed and compiled from the Uni-1399 S a n s k r it P r a k r it T a m il H i nd i M a r a t h i K a nn a d a T e l ugu P un j a b i P a li G u j a r a ti M a l a y a l a m S i nh a l a O d i a G ond i S i ndh i N e p a li T u l u W e s t P a h a r i B e ng a li L a hnd a K u m a on i A ss a m e s e K a s h m i r i K u w i K o t g a r h i K o t a T od a M a it h ili K u i B i h a r i K ho w a r W a i g a li P a l u l a K od a gu S h i n a K ond a K u r ux M a lt o P a r ji K o l a m i G a d a b a A s hkun P a s h a i D h i v e h i B h a l e s i K a m v i r i R o m a n i B ho j pu r i O l d Aw a dh i K u t c h i Language 0 5000 10000 15000 L e mm a s Languages in Jambu Figure 5: Top 50 languages by number of lemmas included in the Jambu database, colour-coded by language family (green = Indo-Aryan, red = Dravidian, blue = Nuristani).",
"versity of Chicago's Digital Dictionaries of South Asia project (Turner, 19621966; Burrow and Emeneau, 1984), existing web databases (Liljegren et al., 2021; Strand, 19972021), and individual articles and theses (Toulmin, 2006; Jouanne, 2014), totalling 294 lects and 202,653 lemmas.",
"Some of these sources have been used in previous work on South Asian historical linguistics, e.g. Cathcart and Rama (2020); Cathcart (2019b,a, 2020)this is the first attempt to consolidate them.",
"Note some previous work in this direction: while the SARVA project (Southworth, 2005) did not reach fruition, a searchable database of Dravidian cognates was developed by Suresh Kolichala under its auspices.",
"4 Past etymological research in South Asian languages was primarily focused on internal comparisons within linguistic families.",
"Unknown etyma was often blindly attributed to Dravidian or Munda without comprehensive cross-linguistic analyses.",
"5 In fact, we find a large number of common words in languages of several families with uncertain origin, possibly substrate loans from undocumented languages.",
"6 In order to provide reliable data for the robust reconstruction of the history of the ancient 4 http://kolichala.com/DEDR/ 5 Recent comparative work on Munda and Indo-Aryan contact such as Ivani et al. (2020) in general find very limited influence of Munda, restricted primarily to the (eastern) IndoAryan languages in close proximity with them.",
"Prior work had a tendency to exaggerate the impact of Munda to explain unusual features of other Indo-Aryan languages; notably, Witzel (1999), who advocated for a historical Para-Munda' family that influenced Indo-Aryan as far as in the northwest, the historical location of Rigvedic Sanskrit.",
"6 Dr. Felix Rau (p.c.) terms these unattested substrate(s) the big X of South Asian linguistic history', and other (possi-ble same) substrate(s) responsible for words reconstructable to Proto-Munda without secure cognates in other Austroasiatic branches the big Y '.",
"Consolidating Indo-Aryan data.",
"While Turner (19621966) and its supplements remain the undisputed gold standard for Indo-Aryan comparative etymologies, many later works on individual languages have considerably expanded our knowledge of cognate relations in underdocumented languages; e.g. Liljegren et al. (2021); Toulmin (2006); Zoller (2005).",
"Inclusion of data from these newer works is ongoing.",
"We also expanded coverage of the isolated and linguistically archaic Nuristani lects (Strand, 19972021), which are contended not to be Indo-Aryancomparative lexical data will help cement their exact phylogenetic status.",
"Updates to Dravidian data.",
"A Dravidian Etymological Dictionary published by Burrow and Emeneau (1984) (2nd edition; abbreviated DEDR) remains the latest effort to gather etymological data on Dravidian.",
"Although Krishnamurti (2003) provides reconstructions for about 500 entries, systematic historical reconstruction for all known cognates of Dravidian is still pending.",
"Subrahmanyam (2011) published an update to the DEDR utilizing new data on several non-literary languages that became available after 1984.",
"Recent fieldwork on several non-literary languages have produced grammars with new vocabulary lists, providing rich data to be updated in DEDR.",
"In addition, several dictionaries with attempted etymologies for many literary languages have appeared since 1984, and can become a source for the realignment of cognates as well as new additions.",
"data-scarce languages that lack adequate corpora on the web.",
"Similar work in this area is the pan-lingual CogNet (Batsuren et al., 2019), and also earlier WordNets (Miller, 1995).",
"Cognate data can be used for transfer learning, where a data-scarce language can map onto existing models for higher-resource languages, such as a distributional semantic model which generally requires massive corpora to train (Sharoff, 2017).",
"Typological data in general offers modest improvements in performance on a variety of NLP tasks (Ponti et al., 2019).",
"Unified transcription.",
"Since many languages of South Asia are unwritten or are lacking standardised orthographies (even in their respective linguistics works), we developed a preliminary system for phonemic transcription of all South Asian languages, which all our cognate data will be converted to.",
"For cognate identification and reconstruction work (both by humans and using NLP tools), a unified phonemic representation is important.",
"This system combines features of the International Alphabet of Sanskrit Transliteration (IAST) 7 with IPA and Americanist phonetic transcription systems.",
"Future work will outline it in depth, along with examples of its focus on cross-family diacritical consistency.",
"One of our main objectives for building extensive comparative lexical and grammatical databases is to ensure credible data from up-to-date, modern sources are available to researchers working on comparative and diachronic linguistics in the South Asian linguistic area.",
"Historical linguistics work needs data, and in South Asia too much work has progressed without including data from non-standardised (even if documented) languages, to the detriment of our understanding of South Asian linguistic history post-Sanskrit (Pystenen, 2022).",
"Below, we highlight two such projects we are currently engaged in involving three data-scarce languages of northern Pakistan: Burushaski, Gawri, and Torwali (Torwali, 2018).",
"The languages of northern Pakistan have been synchronically analyzed to have phonemic tonal contrasts.",
"Baart (2003) has classified such tonal languages into three broad groups based on the type of tonal contrast displayed: 7 https://en.wikipedia.org/wiki/International_ Alphabet_of_Sanskrit_Transliteration Shina-type : Shina varieties, Palula, Indus Kohistani (all Indo-Aryan), Burushaski (isolate) etc.",
"Punjabi-type : Punjabi, Hindko, some Gujari varieties, extending into the Himachali languages of northern India, as well as Kisht-wari, 8 which is usually classified as a divergent dialect of Kashmiri (all Indo-Aryan).",
"Kalami-type : Gawri (Kalami), Kalkoti and Torwali (all Indo-Aryan) and possibly other undiscovered varieties of the area.",
"To these, one may also add the simpler accentual systems of Kalasha-mon (Heegrd-Petersen, 2015) and Khowar (Liljegren and Khan, 2017), which we term Chitrali-type .",
"The tonal system and the historical mechanism of tonogenesis is broadly understood for Punjabi proper and some Hindko varieties (Shackle, 1980; Bashir and Conners, 2019; Bhatia, 2013), but specifics for individual varieties further east (Kishtwari and Himachali) remain underdescribed (Hendriksen, 1986; Jouanne, 2014).",
"This system arises primarily from the disappearance of phonemic breathy voice, but the phonetic specifics differ from language to language.",
"The Shina-type tonal system is both the best described and the best understood diachronically.",
"It continues the Vedic (hence Indo-European) pitch-accent system subject to later changes necessitated by regular apocope (Liljegren, 2008, 2016; Kmmel, 2015).",
"Vedic pitch-accent is also partly continued by the Chitrali-type accentual system (Heegrd-Petersen, 2012), though less conservatively.",
"The tonal diachrony of the Kalami-type system, on the other hand, has not yet been fully understood.",
"Part of the reason is that this system is considerably more complex than the other three accentual systems, contrasting as many as five distinct tonemes (Baart, 1997; Lunsford, 2001; Liljegren, 2013).",
"In ongoing work, based on the Gawri data compiled from Baart (1997, 1999); Baart and Sagar (2004); Baart et al. (2004), we are investigating the origin of the system, and will be appended in the future by Torwali data we are now collecting.",
"Morphology in NLP.",
"In addition to working out the history of the Kalami-type tonal system, we intend to incoporate our annotated lexical dataset into the UniMorph database (Kirov et al., 2018).",
"The morphology of Gawri and Torwali marks gender, 8 Not mentioned in Baart (2003), but independently identified by one of the present authors.",
"number and case for nouns and adjectives primarily by tonal changes and vowel alterations (historical umlaut) unlike other Indo-Aryan languages which use suffixation, though they still encode much the same categories and do not behave any different syntactically either.",
"This makes them prime targets for testing out computational methods for morphological analysis, especially to compare performance vis--vis a related language like Hindi that has a similar grammar but different morphological profile.",
"UniMorph has only a few South Asian languages thus far, as shown in figure 6this is part of a broader project to expand coverage in the region, using existing morphological data stored in analysers (e.g. for Sindhi, Motlani et al., 2016) and grammars (e.g. for Palula, Liljegren et al., 2021).",
"In this vein, we also mention that UniMorph has only a handful of languages that signal morphological alterations tonally.",
"So, our contribution will also improve typological diversity in the database to a considerable extent.",
"Our understanding of the linguistic pre-history of South Asia is heavily reliant on disciplined studies of the histories of the non-Indo-European languages of the subcontinent.",
"This is primarily because while we do have reliable estimates on the time-frame of Indo-European migration into the subcontinent, for the families endemic to the region (including isolates) analogous dating is not possible.",
"Burushaski, spoken in a few mountain valleys of the Karakoram, is among these endemic languages of South Asia.",
"It has attracted quite a bit of scholarly attention since its academic discovery as it stands out both typologically and genealogically in its current neighborhood (cf.",
"the latest descriptive grammars Berger (1974, 1998); Munshi (2018); Yoshioka (2012)).",
"The history of the language and its speakers is virtually unknown until the first linguistic documentation in the mid-nineteenth century.",
"The first secure pre-modern attestation of Burushaski speakers is in Tibetan chronicles dating from the ninth century where a people bru-`za or bru-`sa to the west of Tibet find mention (Jschke, 1881).",
"9,10 As of now, both major varieties of Burushaski are well-documented, but there has been precious little comparative work done.",
"The dictionaries in Berger (1974, 1998) lay the foundation of comparative studies by identifying several layers of potential loans in the language, cf.",
"also Rybatzki (2010).",
"Conversely, potential Burushaski interaction with and influence on the older stages of Indo-Iranian have been explored in Tikkanen (1988); Kmmel (2018), the former mainly dealing with how Burushaski broadly fits into the South Asian linguistic zone.",
"A handful of Burushaski loans in Purik Tibetan are identified in Zemp (2018), not all of them convincing, and Steblin-Kamenskij (1999) contains shared lexemes with Wakhi.",
"More speculative are the claimed Burushaski loans in (Proto-)Romani collected in Berger (1959), believed to be borrowed before the Roma migrated westward toward Europe (presumably) through Burushaski territory.",
"However, all these studies share a common drawback in that we do not yet have a principled way of identifying Burushaski lexemes or grammatical features.",
"A first step toward this goal is Holst (2014), where the author attempts an internal reconstruction of Burushaski through a comparative lexical and morphological study of the two main dialect groups of Yasin and HunzaNager.",
"Holst's work, though, is still just a preliminary investigation and there is much to be added and improved on.",
"In particular, the book does not undertake a systematic study of loanwords to and from neighboring languages as previous areal studies involving Burushaski have, nor does it exhaustively utilize the 9 We are grateful to Dr. Diego Loukota (p.c.) for informing us that a short text in the bru` sa language is also attested in Tibetan records with translation in Sanskrit.",
"We are, however, not aware of any scholarly attempt to interpret said text through modern Burushaski.",
"10 It is also possible that an older ethnonym recorded as Sanskrit muja, maujavataand Avestan muarefer to the same people but that is harder to establish.",
"descriptive literature available resulting in a few avoidable but significant errors of interpretation (Munshi, 2015).",
"This is a major shortcoming because external comparisons are a vital component to reconstructing the histories of language isolates and smaller families, cf.",
"Trask (2013) for Basque and Nikolaeva (2011) for Yukaghir, among others.",
"Computational reconstruction.",
"We have already started a principled reconstruction of Proto-Burushaski building on Holst's work, but utilizing more sources and laying a greater emphasis on loanword etymologizing and chronologizing.",
"Our databases, compiled from available lexical and descriptive sources, are intended to aid this goal of comparative analysis, as well as to make data from Burushaski and neighboring languages available to other researchers.",
"Proto-language reconstruction is an interesting task in computational historical linguistics, and so far work has been under way in a supervised setting on known, high-quality cognate data across related languages, e.g. on Romance languages (Ciobanu and Dinu, 2018; Meloni et al., 2021).",
"Low-resource dependency corpora.",
"In addition, starting with annotated texts from descriptive grammars, we plan to build a dependency treebank for Burushaski as described in 3.1.",
"Burushaski is a low-resourced language in the sense that its domain of use is very restricted and there is no readily available internet corpus one can subject to sophisticated (computational) linguistic analyses automatically.",
"However, as mentioned before, there has been a steady stream of quality descriptive work on it and all published grammars come with a wealth of oral texts one can build a functional corpus withindicating some data-scatteredness that can be leveraged.",
"The data resources we are in the process of compiling for South Asian languages will enable a variety of research to be conducted into language history.",
"We lay out some of the immediate potential pathways for this further research in hopes of stimulating work in this area.",
"A substrate language is one that loans words into a language of higher prestige.",
"A perennial question in South Asian language history for at least a century has been the Indus Valley Civilisation inscrip-tionary corpus, and the problem of deciphering it (if it even encodes a language) and whether it belongs to a known language family of South Asia or something else entirely (Farmer et al., 2004; Fairservis, 1983).",
"Notably, in the mid-20th century a team of Finnish and Soviet linguists and computer scientists claimed evidence that the Indus inscriptions represent a Dravidian language (Parpola, 1986).",
"Recent computational information-theoretic work also suggests language-like properties in the text, a subject of subsequent vociferous debate (Rao et al., 2009, 2010).",
"A serious issue is that we do not have sufficiently diverse data from modern languages of the region against which to compare any purported decipherments of the Indus script (e.g. Proto-Dravidian reconstruction is as of now still in a preliminary stage), and thus even if the Indus language provided any substrate loans into modern families, we would be unable to comprehensively list out possible candidates.",
"The Jambu database can help inform research on substrate contact in the languages of the region.",
"One of the major bottlenecks in compiling existing linguistic data on South Asian languages is that it remains machine-unreadable.",
"For example, many linguistics theses completed at Indian universities have recently been digitised and uploaded to Shod-hganga, 11 but most are scanned images in PDF format.",
"Optical character recognition (OCR) of such texts also requires difficult parsing of diacritics and low-resource scripts.",
"A recent initiative to digitise old linguistic data is the digitisation of the Linguistic Survey of India (Grierson, 19031928) under the project South Asia as a linguistic area?",
"Exploring big-data methods in areal and genetic linguistics (Borin et al., 2020, 2018, 2014).",
"Using OCR and subsequent information extraction from the text, Borin et al. have shown that old data still has much to tell for the computational study of typology and comparative linguistics.",
"Future work on extracting data from non-digitised South Asian language sources will have to use OCR, possibly a neural model finetuned for the purposes of our domain on a platform like Tran-skribus (Kahle et al., 2017).",
"Hmlinen (2021), calling for the NLP community to make a consistent distinction between en-dangered and low-resource languages, implores researchers to stop complaining about how low-resourced [a language] is, [and] get up and gather the data.'",
"In response to this call, we announce several currently-underway (online) fieldwork/data elicitation efforts for Indo-Aryan languages that are both endangered and data-scarce.",
"These include Kholosi, Poguli, Kishtwari, Bhaderwahi, Torwali, and certain divergent dialects of Maldivian (e.g. Huvadhoo).",
"By virtue of their geographical spread (Northern India/Pakistan, Iran, Maldives), linguistic data collected from these languages will further enable the consturction of typologically viable datasets for both NLP and computational historical linguistic tasks.",
"In this paper, we gave an overview of the state of NLP in South Asia with a special focus on historicalcomparative linguistics, a research programme of which we believe will help address the issue of data scatteredness.",
"South Asian languages are not obliged to remain low-resource (in the NLP sense), and have plenty of speakers who would like access to and would benefit from language technologies, along with a multitude of raw linguistic resources that can be used to cultivate them.",
"Incentives have not been in place to support those demands, however, so we suggest an alternative route founded in linguistic research to gather data.",
"Collective efforts have had great success recently in NLPbesides institutional efforts like the Stanford Center for Research on Foundation Models (Bommasani et al., 2021) and HuggingFace's Big-Science Workshop, 12 there are grassroots organi-sations like MaskhaneNLP for African languages (Nekoto et al., 2020) and AI4Bharat (Kakwani et al., 2020) that are working towards improving resource availability.",
"Our proposals in this paper are the first seeds of a programme similar in spirit, motivated by a dual interest in understanding South Asian language history and remedying inequalities in technological availability."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective"
] |
[
"Model-based, reference-free evaluation metrics have been proposed as a fast and cost-effective approach to evaluate Natural Language Generation (NLG) systems.",
"Despite promising recent results, we find evidence that reference-free evaluation metrics of summarization and dialog generation may be relying on spurious correlations with measures such as word overlap, perplexity, and length.",
"We further observe that for text summarization, these metrics have high error rates when ranking current state-of-the-art abstractive summarization systems.",
"We demonstrate that these errors can be mitigated by explicitly designing evaluation metrics to avoid spurious features in reference-free evaluation.",
"Building reliable automated evaluation metrics is a key factor for quick development of better NLG systems.",
"Recent work has proposed reference-free evaluation metrics as a way to judge the quality of generated outputs without the need for human references (Celikyilmaz et al., 2020).",
"Many of these reference-free evaluations achieve remarkably high correlations with human evaluations, raising hopes that they may soon become a viable alternative to expensive human evaluations (Kryscinski et al., 2020; Goyal and Durrett, 2020; Sinha et al., 2020; Phy et al., 2020; Gao et al., 2020).",
"However, simply looking at correlation with human scores may not be sufficient to determine the efficacy and robustness of an evaluation metric.",
"In our work, we study recently proposed reference-free evaluation metrics of text summarization and dialog generation.",
"We find that it is possible to achieve similar levels of correlation with human judgment, using simple spurious correlates such as word overlap, length, and perplexity.",
"Furthermore, we find that the learned metrics have a relaEqual contribution.",
"tively high correlation with the spurious correlates as compared to human scores, which suggests that these metrics may rely heavily on spurious correlations.",
"This may be a potential explanation for the robustness issues that are observed in recent work, despite the seemingly high reported correlations with human judgements (Gabriel et al., 2021; Yeh et al., 2021).",
"We further analyze reference-free faithfulness evaluation metrics and show that the reliance on spurious correlations leads to errors in model selection and development.",
"First, we show that word overlap, a spurious correlate for the task, does as well as recently proposed reference-free metrics at system-level ranking.",
"Then, we look at rankings amongst systems that are relatively abstractive and faithful, i.e., the current state of the art, and find that these learned metrics perform significantly worse for these systems.",
"This is because word-overlap is not a good measure for ranking these systems in terms of their faithfulness since all of these systems have similarly low word overlap.",
"This suggests that we need metrics that are not overly reliant on word overlap in their faithfulness prediction.",
"Finally, we explore whether a simple mitigation strategy of adversarially training a faithfulness evaluation metric to avoid spurious correlates can lead to a more robust metric.",
"We find that our adversarially trained metric performs well at overall pairwise ranking while having a significantly lower correlation with the spurious correlate of word-overlap.",
"Crucially, we show that our proposed metric has improved performance in ranking between abstractive and faithful systems, which is a failure mode for existing reference-free faithfulness evaluation metrics.",
"level evaluation of these metrics.",
"We define a reference-free evaluation metric as a function F ( x, y ) that can assign a quality score to an output sequence y for a given input sequence x .",
"The goal of a reference-free evaluation metric F ( x, y ) is to assign high scores to desirable outputs y for some attribute, such as the faithfulness of a summary.",
"Measuring the quality of this metric is challenging, and prior work has relied upon correlation to human judgments H ( x, y ) .",
"Example-level evaluation: A number of existing reference free evaluations rely upon a procedure which we call example-level human correlations (Fabbri et al., 2020; Phy et al., 2020; Sinha et al., 2020), which measures the effectiveness of a metric by computing a Pearson or Spearman correlation corr p eval ( H ( x, y ) , F ( x, y )) over some sampled evaluation data p eval ( x, y ) .",
"System-level evaluation: An alternative approach to evaluation is systems-level rankings (Mathur et al., 2020; Kocmi et al., 2021), which we define as the ability to identify which model is better amongst a set of models M .",
"F is evaluated via its accuracy in matching human evaluation H on all pairs ( m i , m j ) M M where m i = m j .",
"The definitions of example and system level correlations suggest that evaluations of these metrics may have a strong dependence on the example and systems distributions p eval ( x, y ) and M .",
"As an example, consider an evaluation for dialogue response quality.",
"Building a truly accurate predictor for dialogue response quality is challenging, but if p eval ( x, y ) consists of all either professionally written examples or ungrammatical nonsense, a simple grammar checker would perform exceedingly well.",
"This is an instance of what is called a spurious correlate.",
"More formally, we define this as some attribute S ( x, y ) which is correlated with H in p eval ( x, y ) but is not correlated with H for a carefully constructed test distribution p test ( x, y ) .",
"We say that F is spuriously correlated with S if:",
"1. F and H are highly correlated under p eval ( x, y ) but not under p test ( x, y ) .",
"2. F remains correlated with S under p test ( x, y ) .",
"3 Example-level Analysis of Learned Evaluation Metrics In this section, we look at example-level Spearman correlations with human judgements for reference-free evaluation metrics that have been proposed for summarization and dialog generation.",
"We compare the metrics to spurious correlates such as word-overlap, length and perplexity, in order to understand whether the metrics can perform better than these simple measures.",
"We also measure to what extent the proposed metrics are correlated with these spurious measures.",
"State-of-the-art text summarization models are capable of producing fluent summaries.",
"However, they suffer from generating information that is not consistent (i.e., unfaithful) with the information in the source article (Cao et al., 2018).",
"Prior work showed that reference-based metrics are not able to capture such consistency errors (Falke et al., 2019).",
"This motivated researchers to build evaluation metrics to capture these faithfulness issues since collecting human evaluations for faithfulness is expensive and time-consuming (Wang et al., 2020; Durmus et al., 2020; Kryscinski et al., 2020; Goyal and Durrett, 2020).",
"In this section, we analyze recently proposed reference-free faithfulness evaluation metrics and compare their performance against the spurious correlate of word overlap.",
"Furthermore, we analyze the correlation between the learned metrics and word overlap to understand to what extent these metrics rely on spurious correlations.",
"We focus on learned entailment-based faithfulness evaluation metrics due to their high performance in identifying faithfulness issues (Pagnoni et al., 2021).",
"In particular we evaluate FactCC (Kryscinski et al., 2020) and DAE (Goyal and Durrett, 2021), which have been shown to achieve higher example-level correlations with human judgements than existing faithfulness evaluation metrics (Pagnoni et al., 2021).",
"FactCC.",
"Kryscinski et al. (2020) proposed an entailment-based method where they train a BERT-based model to predict whether or not the source article entails a summary.",
"To train this model, they generate synthetic training data by applying a set of transformations to source article sentences in order to get article, summary pairs.",
"They evaluate their approach on the CNN/DM dataset (See et al., 2017) and report a high accuracy on example-level comparisons on a human-annotated test set.",
"DAE.",
"Goyal and Durrett (2021) collected human annotations at the word-level and arc-level to study faithfulness at a finer granularity.",
"They also trained 1444 Coverage Density FactCC DAE 0.0 0.1 0.2 0.3 0.4 H u m a n C o rr e l a t i o n Spurious Correlates Learned Metrics Figure 1: Correlation of the spurious correlates and learned metrics with human scores.",
"a dependency arc entailment model for faithfulness detection (Goyal and Durrett, 2020).",
"They evaluate on the same test set as Kryscinski et al. (2020) and report improved results over FactCC.",
"We look at how these learned, reference-free metrics compare with word overlap a simple spurious correlate.",
"One simple measure of whether a generated summary is faithful is to look at its word overlap with the source article; summaries with a higher word overlap are more likely to be faithful (Ladhak et al., 2021).",
"However, this measure of faithfulness is spurious because it cannot distinguish between faithful and unfaithful summaries that have similar word overlap.",
"In particular, we look at two metrics of word-overlap following Grusky et al. (2018): coverage and density .",
"Coverage measures the percentage of the words in the summary that are also present in the article.",
"Density instead looks at the average length of the segments in the summary that are extracted from the article.",
"Results.",
"We use the large-scale faithfulness human annotations collected by Fabbri et al. (2020) for 16 summarization models on the CNN/DM dataset (See et al., 2017) for our analysis.",
"Figure 1 shows the example-level correlations with human scores for each of the factuality metrics as well as the spurious correlates.",
"We note that density has a similar correlation with human scores as DAE, and is significanlty 1 better than FactCC.",
"This result is alarming because density is a spurious correlate, yet it can achieve similar performance as the metrics that have been trained for faithfulness evaluation.",
"Moreover, we also see that both FactCC and DAE have a significantly higher correlation with density than they do with human scores (Table 1).",
"This indicates that these metrics may rely upon spurious correlations and are not yet capturing a deeper understanding of faithfulness.",
"Dialog generation systems need to be able to generate a response given the dialog context.",
"The ability to automatically evaluate the quality of a response is essential for building dialogue systems.",
"Liu et al. (2016) show that referenced-based evaluation metrics do not correlate well with human judgments of response quality.",
"This has led to an increased interest in reference-free evaluation metrics for evaluating dialogue response quality.",
"Similar to our analysis in 3.1, we aim to look at recently proposed metrics for reference-free evaluation, along with spurious correlates for dialog response quality, and compare them against human judgments.",
"DialogRPT.",
"Gao et al. (2020) finetune GPT-2 to predict the different types of human feedback (replies, upvotes, etc.) in Reddit threads and combine these to form a composite score for response quality.",
"They evaluate their approach on the Reddit data that they collected and show that their method achieves higher example-level agreement with human judgments than baseline metrics.",
"MAUDE.",
"Sinha et al. (2020) propose a model that encodes each utterance in the dialog context using a pre-trained BERT model and leverages the temporal transitions between them to score a response.",
"They add noise to existing dialog responses to create negative examples and train their system to distinguish them from valid responses using noise contrastive estimation (NCE).",
"They evaluate their model on the PersonaChat (Zhang et al., 2018) dataset and report improved example-level Spearman correlation with human judgments compared to existing baseline metrics.",
"USL-H.",
"Phy et al. (2020) decompose response quality into three aspects and train a model to score a response along each of these aspects.",
"They then combine the scores hierarchically into one composite score for response quality.",
"They evaluate their metric on the DailyDialog (Li et al., 2017) dataset and report significantly higher example-level correlations than previous baseline metrics.",
"MNLI+Adv.",
"Dziri et al. (2021) introduce an entailment-based metric that evaluates the groundedness of a dialog response, i.e., whether the generated response is consistent with the information in the provided external context, such as a Wikipedia article.",
"They trained their metric on automatically generated adversarial data by applying perturbations to the evidence.",
"They further collect human annotations for the various aspects of dialog generation, such as entailment, genericness, etc., and show that their method is more effective in accurately categorizing the generations than existing entailment models.",
"To assess these metrics, we look at two spurious correlates for dialog quality perplexity and length of the generated output as well as a simple combination of two measures.",
"We compute perplexity using a pre-trained GPT-2 language model (Rad-ford et al., 2019).",
"Perplexity (PPL) and length are spurious correlates since they do not account for the dialog context, and therefore it is possible to have high-quality and low-quality responses with similar perplexities/lengths.",
"For groundedness evaluation, we look at the same word overlap measures, as we did for summarization, i.e., density and coverage , and we measure overlap between the response and the provided external evidence.",
"Results.",
"We evaluate metrics 2 for response quality estimation on three popular multi-turn dialog datasets DailyDialog, which contains dialogs 2 We use the code provided by Yeh et al. (2021) for these experiments.",
"about everyday topics (Li et al., 2017), TopicalChat, which contains dialogs conditioned on a set of 8 broad topics (Gopalakrishnan et al., 2019), and PersonaChat, which contains dialogs conditioned on personas (Zhang et al., 2018).",
"To evaluate the recently proposed metric for response groundedness, we use human annotations collected by Dziri et al. (2021) on Wizard of Wikipedia (Dinan et al., 2019), a dataset that consists of dialogues conditioned on information from Wikipedia articles.",
"In particular, we use their entailment annotations, where human annotators judge whether or not the external evidence entails a generated response.",
"Figure 2 shows the correlations with the human scores and the spurious correlates for the dialog generation evaluation metrics.",
"In DialyDialog, we find that perplexity achieves a similar correlation with human judgments as USL-H.",
"In TopicalChat, perplexity or length alone does not beat out any of the learned metrics; however, combining the two measures achieves a significantly better correlation with humans than learned metrics.",
"In PersonaChat, USL-H achieves the highest correlation with human judgment, though the combined PPL+Len score is close.",
"We observe that USL-H is more consistent than the other reference-free metrics and achieves significantly higher correlations with human scores than MAUDE and DialogRPT for PersonaChat and TopicalChat.",
"We further find that the reference-free metrics have a higher correlation with the spurious correlates than the human scores (Table 2), which again suggests that these learned metrics may be relying upon spurious correlations.",
"For groundedness evaluation 3 , both coverage and density achieve significantly higher correlation with human scores than MNLI+Ad and USL-H.",
"Furthermore, MNLI+Ad and USL-H get a higher correlation with these spurious correlates than human scores (Figure 3).",
"Despite relatively high correlations on their original datasets, these metrics seem to perform similarly to simple spurious correlations on other datasets.",
"In order to better understand the effectiveness of these reference-free evaluation metrics, we suggest that future research includes comparisons to potential spurious correlates and that research communities come up with a set of potential standard spurious correlates.",
"Our example-level analysis demonstrates that recently proposed learned evaluation metrics achieve worse correlations with human scores than spurious correlates for almost all the settings.",
"Since an important goal of building these metrics is to be able to rank arbitrary systems, we analyze whether these concerns we observe at the example level manifest into harms at the system level (i.e., ranking systems incorrectly).",
"In order to study this, we need a large collection of human evaluation data across a wide range of systems.",
"Fabbri et al. (2020) have recently released human evaluations for faithfulness across 16 summarization systems on CNN/DM.",
"Therefore, we focus on system-level rankings of faithfulness for the remainder of the paper.",
"We first measure pairwise ranking accuracy for all the systems shown in Figure 4.",
"4 We find that system-level rankings suffer from a similar issue as the example level correlations: density and cover-3 We do not include MAUDE and DialogRPT results for this task since they perform significantly worse.",
"age appear as spurious correlations (Table 4).",
"From this observation, we perform a finer-grained analysis and show that these factuality metrics fail on the most important subset of model comparisons: abstractive but faithful summarization system (AF) where the current state-of-the-art abstractive summarization systems fall.",
"Both faithfulness metrics perform relatively well when we look at pairwise ranking accuracy across all pairs of models (Table 4).",
"However, they are unable to improve over density , which achieves the highest overall accuracy.",
"When we look at ranking within the abstractive faithful group, we see density is no longer a good measure for the faithfulness of a system since these systems are relatively close in terms of density.",
"Similarly, the performance of the learned metrics drops significantly, which is an expected result since our analysis in 3.1 showed that both FactCC and DAE are spuriously correlated with density.",
"We claim that our system-level analysis is further evidence that these metrics may be relying heavily on simple spurious measures such as word overlap.",
"These results highlight the importance of performing analyses across different distributions of systems.",
"If we were looking at just the overall ranking accuracy of the metrics, we would conclude that DAE and FactCC correctly measure faithfulness.",
"However, on closer examination, we see that both metrics perform relatively poorly in ranking AF systems, which is arguably the most crucial group since most state-of-the-art systems operate in this regime, and there is substantial interest in building abstractive and faithful summarization systems.",
"In our earlier example-level analysis, we found that learned metrics have higher correlation with spurious correlates than human judgment.",
"We further saw in our system-level analysis that learned metrics for faithfulness are unable to outperform density.",
"One natural question that follows is whether we can build metrics that do well at the systems level by learning representations that rely less on spurious correlates.",
"In order to do this, we train an entailment based model using the synthetically generated data from FactCC in an adversarial setup similar to Ganin et al. (2016).",
"In particular, our approach augments the standard faithfulness predictor with a density predictor that tries to predict the density of the summary from the model's internal representation.",
"We use this density predictor as an adversary, and our goal is to predict faithfulness while ensuring that it is difficult to predict density using this same representation.",
"To achieve this, the gradients from the density predictor are reversed, which makes it harder to predict the density from the encoder's representation, and thus makes the faithfulness predictions less reliant on density.",
"The model architecture is shown in Figure 5.",
"We initialize the parameter to 0 and gradually increase it to 1 , following the schedule detailed in Ganin et al. (2016).",
"We fine-tune a pre-trained Electra model (Clark et al., 2020) using the transformers library (Wolf et al., 2020) for this task.",
"We chose Electra in order to match the model architecture in DAE.",
"Since the original FactCC metric was fine-tuned on BERT, we also fine-tune our own version of FactCC on Electra (FactCC-Electra) as an ablation.",
"Our adversarially trained model is essentially the same as FactCC-Electra, but with an additional adversarial head for predicting density.",
"Results.",
"We note that the FactCC-Electra model performs worse than the original FactCC, which is consistent with the findings in Goyal and Durrett (2021).",
"Our adversarially trained metric has a significantly lower example-level correlation with density (27.71%), as compared to FactCC (59.10%) 1449 and DAE (76.37%).",
"We find that the adversarial model 5 can achieve a significantly better performance than existing learned evaluation metrics in ranking systems within the abstractive faithful (AF) group (Table 5).",
"This suggests that it is possible to learn effective metrics that are not overly reliant on spurious correlates.",
"Furthermore, our metric is also effective in overall pairwise ranking of the systems achieving 85 .",
"27% accuracy.",
"Most existing work on assessing the evaluation methodology of evaluation metrics has focused on reference-based evaluation.",
"For example, Mathur et al. (2020) take a critical look at the use of example-level correlations to measure reference-based evaluation metrics in Machine Translation.",
"They show that evaluating these metrics using example-level correlations can be sensitive to the presence of outliers which can lead to false conclusions about a metric's efficacy.",
"Furthermore, Kocmi et al. (2021) show that proper assessment of evaluation metrics is crucial as uninformed use of automated metrics such as BLEU can lead to bad deployment decisions.",
"Caglayan et al. (2020) has shown that automated reference-based evaluation metrics have robustness issues which can cause them to score generated outputs higher than human written outputs.",
"Furthermore, Bhandari et al. (2020) has studied the limitations of reference-based evaluation metrics of text summarization, comparing these metrics across different datasets and application scenarios.",
"In contrast, our work focuses on analyzing learned, reference-free evaluation metrics in summarization and dialog generation, accounting for potential spurious correlates for these evaluation tasks.",
"There has been some recent work comparing existing reference-free evaluation metrics for text summarization and dialog generation.",
"Pagnoni et al. (2021) has measured the efficacy of existing reference-free faithfulness evaluation metrics of summarization on two different summarization datasets relying on example-level correlations.",
"Similarly, Gehrmann et al. (2021) has evaluated automated metrics of text summarization across a wide range of datasets.",
"Gabriel et al. (2021) has proposed a meta-evaluation framework to evaluate the evaluation metrics looking at certain aspects of 5 Our adversarially trained model can be found at https://github.com/esdurmus/adversarial_eval.",
"these metrics such as robustness, sensitivity, high correlation with human scores, etc., and measured existing evaluation metrics across these aspects.",
"Yeh et al. (2021) perform a comprehensive study of existing dialog generation metrics across several different datasets and find that the performance of metrics varies widely across datasets.",
"Gabriel et al. (2021) and Yeh et al. (2021) are the most related to our work since they study robustness of these metrics looking at their performance across different datasets.",
"In our work, however, we explicitly study spurious correlations and show that these may potentially be contributing to the robustness issues.",
"We further present initial promising results suggesting that controlling for these spurious correlates may result in more robust evaluation metrics.",
"In conclusion, we study reference-free evaluation metrics for summarization and dialog generation and show that simply looking at overall example-level correlation with human judgment paints an incomplete picture of the effectiveness of a metric.",
"In particular, we show that these metrics are unable to do better than simple spurious correlates for the task.",
"We see that this trend carries over in system-level ranking for summarization systems, where a spurious correlate for the task performs as well as existing learned evaluation metrics.",
"We find that despite the relatively high overall system-level ranking performance, the learned metrics are not robust to distribution shifts.",
"We show that they fail to properly rank abstractive and (relatively) faithful systems, which is where the current state of the art operates.",
"Finally, we train a faithfulness metric that scores the faithfulness of a summary without relying on the spurious overlap correlate.",
"We show that our metric is more robust across distribution shifts and does better at ranking abstractive, faithful summarization systems.",
"We suggest that future work in designing reference-free evaluation metrics should be mindful of the distribution of the evaluation data.",
"In particular, metrics should be assessed across different distributions of systems in order to test for robustness and failure modes.",
"Simple spurious correlates can be used as a tool to indicate potential overestimates of the effectiveness of proposed metrics.",
"Finally, we highlight the importance of collecting large-scale human evaluation datasets across a wide 1450 range of systems, similar to Fabbri et al. (2020), to enable more comprehensive analyses of evaluation metrics.",
"ED is supported by SAIL Postdoc Fellowship.",
"We further thank the anonymous reviewers and the Stanford NLP group for their invaluable feedback.",
"References Manik Bhandari, Pranav Narayan Gour, Atabak Ash-faq, Pengfei Liu, and Graham Neubig.",
"2020.",
"Reevaluating evaluation in text summarization.",
"In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 93479359, Online.",
"Association for Computational Linguistics.",
"Ozan Caglayan, Pranava Madhyastha, and Lucia Specia.",
"2020.",
"Curious case of language generation evaluation metrics: A cautionary tale.",
"In Proceedings of the 28th International Conference on Computational Linguistics , pages 23222328, Barcelona, Spain (On-line).",
"International Committee on Computational Linguistics.",
"Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li.",
"2018.",
"Faithful to the original: Fact aware neural abstractive summarization.",
"In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018 , pages 47844791.",
"AAAI Press.",
"Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao.",
"2020.",
"Evaluation of text generation: A survey.",
"CoRR , abs/2006.14799.",
"Yen-Chun Chen and Mohit Bansal.",
"2018.",
"Fast abstractive summarization with reinforce-selected sentence rewriting.",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol-ume 1: Long Papers) , pages 675686, Melbourne, Australia.",
"Association for Computational Linguistics.",
"Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning.",
"2020.",
"ELECTRA: Pretraining text encoders as discriminators rather than generators.",
"In ICLR .",
"Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston.",
"2019.",
"Wizard of wikipedia: Knowledge-powered conversational agents.",
"Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung.",
"2018.",
"Bandit-Sum: Extractive summarization as a contextual bandit.",
"In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 37393748, Brussels, Belgium.",
"Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khy-athi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondrej Duek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jham-tani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van 1451 Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, Joo Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimo-rina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou.",
"Association for Computational Linguistics.",
"Nouha Dziri, Hannah Rashkin, Tal Linzen, and David Reitter.",
"2021.",
"Evaluating groundedness in dialogue systems: The BEGIN benchmark.",
"CoRR , abs/2105.00071.",
"Alexander R Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev.",
"2020.",
"Summeval: Re-evaluating summarization evaluation.",
"arXiv preprint arXiv:2007.12626 .",
"Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, and Jianfeng Gao.",
"2021.",
"GO FIGURE: A meta evaluation of factuality in summarization.",
"In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 , pages 478487, Online.",
"Association for Computational Linguistics.",
"Esin Durmus, He He, and Mona Diab.",
"2020.",
"FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization.",
"In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 5055 5070, Online.",
"Association for Computational Linguistics.",
"Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych.",
"2019.",
"Ranking generated summaries by correctness: An interesting but challenging application for natural language inference.",
"In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 22142220, Florence, Italy.",
"Association for Computational Linguistics.",
"Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franois Lavi-olette, Mario Marchand, and Victor Lempitsky.",
"2016.",
"Domain-adversarial training of neural networks.",
"The journal of machine learning research , 17(1):2096 2030.",
"Xiang Gao, Yizhe Zhang, Michel Galley, Chris Brock-ett, and Bill Dolan.",
"2020.",
"Dialogue response ranking training with large-scale human feedback data.",
"In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 386395, Online.",
"Association for Computational Linguistics.",
"2021.",
"The GEM benchmark: Natural language generation, its evaluation and metrics.",
"In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021) , pages 96120, Online.",
"Association for Computational Linguistics.",
"Sebastian Gehrmann, Yuntian Deng, and Alexander Rush.",
"2018.",
"Bottom-up abstractive summarization.",
"In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 40984109, Brussels, Belgium.",
"Association for Computational Linguistics.",
"Tanya Goyal and Greg Durrett.",
"2020.",
"Evaluating factuality in generation with dependency-level entailment.",
"In Findings of the Association for Computational Linguistics: EMNLP 2020 , pages 35923603, Online.",
"Association for Computational Linguistics.",
"Tanya Goyal and Greg Durrett.",
"2021.",
"Annotating and modeling fine-grained factuality in summarization.",
"In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 14491462, Online.",
"Association for Computational Linguistics.",
"Max Grusky, Mor Naaman, and Yoav Artzi.",
"2018.",
"Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies.",
"In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) , pages 708719, New Orleans, Louisiana.",
"Association for Computational Linguistics.",
"Han Guo, Ramakanth Pasunuru, and Mohit Bansal.",
"2018.",
"Soft layer-specific multi-task summarization with entailment and question generation.",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 687697, Melbourne, Australia.",
"Association for Computational Linguistics.",
"Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun.",
"2018.",
"A unified model for extractive and abstractive summarization using inconsistency loss.",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 132141, Melbourne, Australia.",
"Karthik Gopalakrishnan, Behnam Hedayatnia, Qin-lang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tr.",
"2019.",
"Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations.",
"In Proc.",
"Interspeech 2019 , pages 18911895.",
"Association for Computational Linguistics.",
"Yichen Jiang and Mohit Bansal.",
"2018.",
"Closed-book training to improve summarization encoder memory.",
"In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 40674077, Brussels, Belgium.",
"Association for Computational Linguistics.",
"Tom Kocmi, Christian Federmann, Roman Grund-kiewicz, Marcin Junczys-Dowmunt, Hitokazu Mat-sushita, and Arul Menezes.",
"2021.",
"To ship or not to ship: An extensive evaluation of automatic metrics for machine translation.",
"CoRR , abs/2107.10821.",
"Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher.",
"2020.",
"Evaluating the factual consistency of abstractive text summarization.",
"In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 93329346, Online.",
"Association for Computational Linguistics.",
"Wojciech Kryscinski, Romain Paulus, Caiming Xiong, and Richard Socher.",
"2018.",
"Improving abstraction in text summarization.",
"In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 18081817, Brussels, Belgium.",
"Association for Computational Linguistics.",
"Faisal Ladhak, Esin Durmus, He He, Claire Cardie, and Kathleen R. McKeown.",
"2021.",
"Faithful or extractive?",
"on mitigating the faithfulness-abstractiveness trade-off in abstractive summarization.",
"CoRR , abs/2108.13684.",
"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer.",
"2020.",
"BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.",
"In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 78717880, Online.",
"Association for Computational Linguistics.",
"Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu.",
"2017.",
"DailyDialog: A manually labelled multi-turn dialogue dataset.",
"In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) , pages 986995, Taipei, Taiwan.",
"Asian Federation of Natural Language Processing.",
"Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose-worthy, Laurent Charlin, and Joelle Pineau.",
"2016.",
"How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation.",
"In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing , pages 21222132, Austin, Texas.",
"Association for Computational Linguistics.",
"Nitika Mathur, Timothy Baldwin, and Trevor Cohn.",
"2020.",
"Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics.",
"In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 49844997, Online.",
"Association for Computational Linguistics.",
"Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov.",
"2021.",
"Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics.",
"In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 48124829, Online.",
"Association for Computational Linguistics.",
"Ramakanth Pasunuru and Mohit Bansal.",
"2018.",
"Multi-reward reinforced summarization with saliency and entailment.",
"In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) , pages 646653, New Orleans, Louisiana.",
"Association for Computational Linguistics.",
"Vitou Phy, Yang Zhao, and Akiko Aizawa.",
"2020.",
"Decon-struct to reconstruct a configurable evaluation metric for open-domain dialogue systems.",
"In Proceedings of the 28th International Conference on Computational Linguistics , pages 41644178, Barcelona, Spain (On-line).",
"International Committee on Computational Linguistics.",
"Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.",
"2019.",
"Language models are unsupervised multitask learners.",
"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.",
"2019.",
"Exploring the limits of transfer learning with a unified text-to-text transformer.",
"CoRR , abs/1910.10683.",
"Abigail See, Peter J. Liu, and Christopher D. Manning.",
"2017.",
"Get to the point: Summarization with pointer-generator networks.",
"In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1073 1083, Vancouver, Canada.",
"Association for Computational Linguistics.",
"Koustuv Sinha, Prasanna Parthasarathi, Jasmine Wang, Ryan Lowe, William L. Hamilton, and Joelle Pineau.",
"2020.",
"Learning an unreferenced metric for online dialogue evaluation.",
"In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 24302441, Online.",
"Association for Computational Linguistics.",
"Alex Wang, Kyunghyun Cho, and Mike Lewis.",
"2020.",
"Asking and answering questions to evaluate the factual consistency of summaries.",
"In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 50085020, Online.",
"Association for Computational Linguistics.",
"Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier-ric Cistac, Tim Rault, Rmi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush.",
"2020.",
"Transformers: State-of-the-art natural language processing.",
"In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations , pages 3845, Online.",
"Association for Computational Linguistics.",
"Yuxiang Wu and Baotian Hu.",
"2018.",
"Learning to extract coherent summary via deep reinforcement learning.",
"In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018 , pages 56025609.",
"AAAI Press.",
"Yi-Ting Yeh, Maxine Eskenazi, and Shikib Mehri.",
"2021.",
"A comprehensive assessment of dialog evaluation metrics.",
"In The First Workshop on Evaluations and Assessments of Neural Conversation Systems , pages 1533, Online.",
"Association for Computational Linguistics.",
"Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston.",
"2018.",
"Personalizing dialogue agents: I have a dog, do you have pets too?",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 22042213, Melbourne, Australia.",
"Association for Computational Linguistics.",
"Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao.",
"2018.",
"Neural document summarization by jointly learning to score and select sentences.",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 654663, Melbourne, Australia.",
"Association for Computational Linguistics.",
"Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving.",
"2019.",
"Fine-tuning language models from human preferences.",
"arXiv preprint arXiv:1909.08593 ."
] | [
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"result",
"result",
"result",
"result",
"result",
"method",
"result",
"method",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Document-level neural machine translation (DocNMT) achieves coherent translations by incorporating cross-sentence context.",
"However, for most language pairs there's a shortage of parallel documents, although parallel sentences are readily available.",
"In this paper, we study whether and how contextual modeling in DocNMT is transferable via multilingual modeling.",
"We focus on the scenario of zero-shot transfer from teacher languages with document level data to student languages with no documents but sentence level data, and for the first time treat document-level translation as a transfer learning problem.",
"Using simple concatenation-based DocNMT, we explore the effect of 3 factors on the transfer: the number of teacher languages with document level data, the balance between document and sentence level data at training, and the data condition of parallel documents (genuine vs. back-translated).",
"Our experiments on Europarl-7 and IWSLT-10 show the feasibility of multilingual transfer for DocNMT, particularly on document-specific metrics.",
"We observe that more teacher languages and adequate data balance both contribute to better transfer quality.",
"Surprisingly, the transfer is less sensitive to the data condition, where multilingual DocNMT delivers decent performance with either back-translated or genuine document pairs.",
"Recent years have witnessed a trend moving from sentence-level neural machine translation (Sen-NMT) to its document-level counterpart (Doc-NMT).",
"SenNMT inevitably suffers from translation errors related with document phenomena (Maruf et al., 2021) and delivers obviously inferior performance when compared against human translations and evaluated at a document level (Lubli et al., Work done while Biao Zhang was interning at Google Research.",
"2018).",
"Most efforts on DocNMT aim at improving contextual modeling via dedicated model architectures and/or decoding algorithms (Bawden et al., 2018; Voita et al., 2019; Chen et al., 2020) and heavily rely on large-scale parallel document resources.",
"Nevertheless, document resources are unevenly distributed across language pairs, with most pairs having little to no such resources.",
"1 One promising way to accommodate languages with varied training data is multilingual modeling, as demonstrated in multilingual SenNMT (Firat et al., 2016; Johnson et al., 2017).",
"By sharing parameters across languages, multilingual modeling encourages cross-lingual knowledge transfer, enabling performance improvement and even zero-shot transfer (Aharoni et al., 2019; Arivazhagan et al., 2019b; Zhang et al., 2020, 2021).",
"In the context of translation, however, most studies on multilingual transfer center around SenNMT, seldom going beyond sentence-level translation.",
"So far, the question of whether and how document-level contextual modeling can be learned cross-lingually in multilingual DocNMT is still unanswered.",
"In this paper, we study zero-shot generalization for DocNMT the ability to attain plausible Doc-1 Note that we use language and language pair interchangeably since one side of our parallel data is always English.",
"NMT quality for some focused ( student ) language pair(s), with only parallel sentences for the student but parallel documents for other ( teacher ) languages in the multilingual mix.",
"The high-level research question we seek to answer is illustrated in Figure 1.",
"We resort to transfer learning via multilinguality to leverage document resources in teacher languages to help the student languages.",
"We perform our analysis using a simple concatenation based DocNMT, where consecutive sentences are chained into one sequence for translation.",
"We investigate three dimensions extensively to understand the transfer in multilingual DocNMT:",
"1) the number of languages with document level data (teacher languages), where we simplify our transfer setup to contain either only one teacher language (with N students) or N teachers (with one student);",
"2) the data balance for parallel documents, i.e. manipulating the ratio of document-level data to sentence-level data during training; and",
"3) the data condition of parallel documents, where we adopt back-translated parallel documents when only monolingual documents are given in teacher languages or use genuine parallel documents crawled natively.",
"We conduct experiments on two publicly available datasets, namely Europarl-7 and IWSLT-10, covering 6 and 9 languages from/to English respectively.",
"We analyze one-to-many (En Xx) and many-to-one (Xx En) translation scenarios separately.",
"Following recent work (Ma et al., 2021), we adopt document-specific metrics for evaluation apart from BLEU and support our findings with human evaluations.",
"We also propose a pronoun F1 metric (targeted at gendered pronouns: he/she) for Xx En translation, and employ accuracy on contrastive test sets (Bawden et al., 2018; Mller et al., 2018) for En Xx translation.",
"Our main findings are summarized below: Zero-shot transfer from sentences to documents is feasible through multilingual DocNMT modeling, particularly when evaluated with document-specific metrics.",
"This is partially supported by human evaluation.",
"Transfer quality is strongly affected by the number of teacher languages that use document level data and the data balance for documents.",
"Higher quality is achieved with more teacher languages and adequate document schedule, where the optimal balance varies across scenarios.",
"Surprisingly, transfer via back-translated documents performs comparable to transfer via genuine parallel documents.",
"Zero-shot transfer from high-resource document level languages and to low-resource sentence level ones is relatively easier, resulting in better transfer results.",
"Document-level MT Integrating document-level information meaningfully into NMT is a challenging task, which has inspired research not only on exploring advanced context-aware neural architectures, including simple concatenation-based models (Tiedemann and Scherrer, 2017; Junczys-Dowmunt, 2019; Lopes et al., 2020), multi-source models (Jean et al., 2017; Bawden et al., 2018; Zhang et al., 2018), hierarchical models (Miculi-cich et al., 2018; Zheng et al., 2020; Chen et al., 2020), multi-pass models (Voita et al., 2019; Yu et al., 2020; Mansimov et al., 2021) and dynamic context models (Kang et al., 2020), to name a few.",
"But it has also motivated the field to revisit the common protocols resorted for evaluation (Freitag et al., 2021).",
"Despite the hard to measure success, all the above mentioned methods implicitly assume an abundance of document resources and overlook the data scarcity problem.",
"In this study, we adopt the simple concatenation model as our experimental protocol, and leave the exploration of various input formatting options and modelling to future work.",
"Considering the fast changing landscape of the (contextual) MT evaluation, we also provide multiple evaluation metrics including human evaluations, to give a full picture of the phenomena under investigation, while acknowledging the current imperfections of and disagreements on the right way of evaluating MT systems (Kocmi et al., 2021).",
"Zero-Shot Transfer via Multilinguality Multilingual modeling often clusters sentences of similar meaning from different languages within a shared semantic space (Kudugunta et al., 2019; Siddhant et al., 2020).",
"Such representation space is hypothesized to enable zero-shot transfer, delivering improved performance in many cross-lingual tasks (Eriguchi et al., 2018; Hu et al., 2020; Chi et al., 2021; Ruder et al., 2021), especially based on large-scale pretrained multilingual Transformers (Devlin et al., 2019; Conneau and Lample, 2019; Xue et al., 2021).",
"When it comes to transla-4177 tion, multilingual SenNMT successfully achieves zero-shot translation, transferring sentence-level generation knowledge to language pairs unseen during training (Firat et al., 2016; Johnson et al., 2017; Gu et al., 2019; Arivazhagan et al., 2019a) even in massively multilingual settings (Aharoni et al., 2019; Arivazhagan et al., 2019b; Zhang et al., 2020, 2021).",
"Our study extends multilingual SenNMT to multilingual DocNMT and aims at document-level knowledge transfer from languages that have document level data to languages that only have sentence level data.",
"To the best of our knowledge, our study is the first demonstrating the emergence of document-level zero-shot transfer across languages for multilingual machine translation.",
"We first formulate the zero-shot generalization framework explored in this paper.",
"Given N+1 language pairs, we assume that all of them have parallel sentences for training, but only some of them have parallel documents (teachers).",
"Through multilingual training, we study to what degree contextual modeling in document-supervised DocNMT can be transferred to those document-poor (student) languages as in Figure 1.",
"Any form of parallel document for student languages is disallowed at training, ensuring that the transfer is measured zero-shot.",
"We employ the concatenation-based method with a D 2 D structure for DocNMT, where D consecutive sentences in a document are concatenated into one sequence for translation (Junczys-Dowmunt, 2019; Sun et al., 2020).",
"Sentence boundary is indicated by a special symbol [SEN].",
"We adopt the language token method (Johnson et al., 2017) for multilingual DocNMT, using source and target language token for Xx En and En Xx translation respectively.",
"Instead of appending this token to the source sequence, we add its embedding to each source word embedding to strengthen the language signal in a document translation setting.",
"For training , we adopt a two-stage method: we first pretrain a multilingual SenNMT on sentence level data for all languages; then, we finetune it to obtain multilingual DocNMT on a mix of document level data from teacher languages and sentence level data from student languages.",
"Our analysis requires training a large number of DocNMT models, and the two-stage method saves substantial amounts of computation by sharing the pretrained SenNMT.",
"For evaluation , we distinguish sentence-level inference (SenInfer) from its document-level counterpart (DocInfer).",
"SenInfer translates sentences separately (out of context), while DocInfer translates D consecutive and non-overlapping sentences in context with each other.",
"2 3.2 Zero-Shot Setup We explore three factors for the zero-shot transfer: The number of teacher languages The source of the transfer comes from teacher languages.",
"Intuitively, both the number of teacher languages and their relevance to student language(s) affect the transfer result.",
"However, exhaustively exploring all possible teacher-student configurations in a multilingual setting will lead to a large search space that expands exponentially with respect to the total number of languages involved.",
"Instead, we simplify our study by exploring two extreme transfer settings, namely N21 and 12N transfer.",
"The first setting uses N teachers that incorporate document level data with 1 student having sentence level data only, while the second setting has 1 teacher and N students.",
"Note that in either N21 or 12N transfer, there exist N teacher-student configurations, and we report average results over them.",
"3 The data balance for parallel documents When varying the number of teacher languages, the proportion of document data at training also changes.",
"Such imbalance could deeply affect transfer (Arivazhagan et al., 2019b).",
"To offset this effect, we include the data balance for analysis by controlling the sampling ratio p of documents from 0.1 to 0.9 with a step size of 0.1.",
"Note p is for documents in all teacher languages , and the relative proportion among teachers is always retained.",
"The data condition of parallel documents We also study when teacher languages have no parallel documents but only monolingual ones.",
"Methods utilizing monolingual documents for DocNMT vary greatly.",
"Following recent work (Sugiyama and Yoshinaga, 2 At decoding phase, the last chunk in a source document can have < D sentences for DocInfer.",
"2019; Huo et al., 2020; Ul Haq et al., 2020), we adopt back-translation (BT) to construct pseudo parallel documents.",
"Note that, for teacher languages, we replace all sentence level training data with pseudo documents rather than mixing them according to our empirical results in Appendix C. 4 Experimental Settings Datasets We conduct experiments on two public datasets: Europarl-7 and IWSLT-10.",
"Europarl-7 is extracted from European Parliament (v10) and has translations between English and N = 6 different languages, including Czech, German, Finnish, French, Lithuanian and Polish (Koehn, 2005).",
"This dataset offers sentence-aligned parallel documents (0.9K 3.7K documents, 190K 1.9M sentences) and also monolingual documents (9.7K 11K documents, 0.65M 2.28M sentences) for training.",
"For evaluation, we use the WMT dev and test sets (Bar-rault et al., 2020) available for each language pair (from 2013 to 2020).",
"In contrast, IWSLT-10 is collected from TED talks and covers translations between English and N = 9 different languages, including Arabic, German, French, Italian, Japanese, Korean, Dutch, Romanian and Chinese (Cettolo et al., 2017).",
"Unlike Europarl-7, the distribution of training data over languages in IWSLT-10 is much smoother (uniform).",
"There are 1.9K sentence-aligned parallel documents with 240K sentences for each language pair.",
"We further collected about 1K TED talks for each language pair (crawled from Feb 2018 to Jan 2021) as monolingual documents.",
"We use IWSLT17 dev and test sets for evaluation.",
"Detailed statistics are given in Appendix A. We preprocess all texts with the byte pair encoding (BPE) algorithm (Sennrich et al., 2016) implemented in the sentencepiece toolkit (Kudo and Richardson, 2018), and set the vocabulary size to 32K and 64K for IWSLT-10 and Europarl-7, respectively.",
"Model Details We use the Transformer-base model (Vaswani et al., 2017) for experiments with 6 encoder/decoder layers, 8 attention heads and a model dimension of 512/2048.",
"We set D = 5 for DocNMT.",
"We use Adam (Kingma and Ba, 2015) ( 1 = 0 . 9 , 2 = 0 . 98 ) for parameter update with a learning rate warmup step of 4K and label smoothing rate of 0.1.",
"We apply dropout to residual connections and attention weights with a rate of 0.5 and 0.2, respectively.",
"Other training and decoding details are given in Appendix B. Back-Translation Some of our models are trained using back-translated monolingual documents.",
"Back-translations are obtained using bilingual SenNMT (independently for Europarl-7 and IWSLT-10).",
"To train these models, we halve the BPE vocabulary size as well as the training steps.",
"All other settings are kept as mentioned above.",
"Evaluation Following previous work, we use BLEU (Post, 2018) 4 to measure the general translation quality.",
"Document-level BLEU is calculated by counting n-gram at the document level instead of at the individual sentence level (Sun et al., 2020).",
"Measuring improvements to document phenomena in translation automatically remains challenging and oftentimes simple surface-based metrics such as BLEU (Lubli et al., 2018) are not sensitive enough.",
"Therefore, we evaluate our model on test sets that focus on such document phenomena.",
"We use the contrastive test sets for En-De (Mller et al., 2018) and En-Fr (Bawden et al., 2018) which measure a model's ability to distinguish correct from incorrect anaphoric pronoun translations.",
"We include 4 and 1 additional context sentences for EnDe and En-Fr contrastive evaluation, respectively.",
"Gender bias in translation models has attracted much attention recently (Kuczmarski and Johnson, 2018; Saunders and Byrne, 2020).",
"We expect that contextual information can help to alleviate it.",
"To this end, we introduce gendered pronoun F1 based on the following precision and recall scores to evaluate English translations: Precision = (cid:80) i,g G min( C g r i , C g h i ) (cid:80) i,g G C g h i Recall = (cid:80) i,g G min( C g r i , C g h i ) (cid:80) i,g G C g r i , (1) where r i and h i denotes the i -th gold reference and hypothesis sentence respectively, comprising the gendered pronouns of interest G 5 .",
"C g x denotes the count of pronoun g in sentence x .",
"Finally, we conduct human evaluation to verify the performance delivered by zero-shot transfer.",
"We work on En-De, Europarl-7, where we sample 50 source documents from the test set, and translate them into the target language using the corresponding models and decoding techniques.",
"The translated documents are presented to bilingual human raters who are native in the non-English locale.",
"The raters are asked to evaluate translation qualities while taking the full source document context into account.",
"The raters assign a score in a 0-6 scale to every sentence-translation pair in the document, where 0 and 6 mean nonsense and perfect translations, respectively.",
"For each model, the scores are aggregated across the entire test corpus and the average scores are reported.",
"To ensure a fair diversity of ratings, each rater rates no more than 6 documents per model; an average of 18 raters evaluated each model independently.",
"Does SenNMT have the capability of leveraging context?",
"Not really!",
"We put our major analysis on Europarl-7 (N=6, all European languages).",
"Before diving deep into the transfer, we start with analyzing whether SenNMT models trained on sentences alone could generalize to contextual translation.",
"If multilingual SenNMT can be directly used for DocInfer, studying zero-shot transfer would be Model Xx En En Xx SenNMT w/ SenInfer 22.40 18.82 SenNMT w/ DocInfer ( D = 2 ) -2.98 -4.05 SenNMT w/ DocInfer ( D = 5 ) -11.7 -13.0 Table 1: Average BLEU on Europarl-7 for multilingual SenNMT with SenInfer and DocInfer.",
"meaningless.",
"Results in Table 1 challenge this possibility: SenNMT results in large quality reduction with DocInfer.",
"We observe that SenNMT produces significantly shorter translations under DocInfer, preferring to translate the first few input sentences.",
"We ascribe such failures to the poor generalization to documents from sentence-level training.",
"Impact of the data balance and the number of teacher languages on zero-shot transfer Figure 2 and 3 summarize the results for En Xx and Xx En translation, respectively, where we report the average performance paired with the stan-4180 0.1 0.3 0.5 0.7 0.9 Sen proportion p Doc 6 4 2 0 BLEU v s .",
"dard deviation over N configurations.",
"6 Overall, the document-level zero-shot transfer is achievable via multilingual modeling.",
"Transfer-based DocNMT could successfully identify and translate the correct number of input sentences for student languages.",
"With a proper sampling ratio for document-level data, student DocNMT yields better performance than its SenNMT counterpart, especially shown by document-specific evaluations (F1 and ACC).",
"Increasing teacher languages improves transfer.",
"In En Xx and Xx En translation, we find that N21 transfer performs consistently better than 12N transfer on all metrics.",
"This is reasonable since N21 transfer has N teacher languages, offering richer and more informative sources for transfer.",
"Balancing between document and sentence data matters for transfer.",
"We also observe that performance changes over the document proportion on 6 Note the average results are for transfer directions, not the supervised ones.",
"Each experiment in N21 transfer has only one transfer direction, so we directly report the average over N configurations; by contrast, in 12N transfer, we have N transfer directions, where we first perform average over these N transfer results followed by another average over N configurations.",
"Also note, the average results contains transfer from high/low and similar/distant languages.",
"all metrics in both 12N and N21 transfer.",
"Applying more or fewer documents during training often hurts zero-shot transfer, indicating a trade-off.",
"Roughly, setting p to 30% 50% delivers good performance (Figure 2 and 3), although the optimal proportion depends.",
"SenInfer underperforms DocInfer on document-specific metrics.",
"DocNMT w/ SenInfer performs similarly to SenNMT, and better than DocInfer on BLEU.",
"When evaluating document phenomena, however, SenInfer shows clear insufficiency.",
"This resonates with the findings of Ma et al. (2021).",
"monolingual documents?",
"Yes.",
"We next repeat our experiments with BT document pairs.",
"Figure 4 and 5 show that BT performs surprisingly well on document-level zero-shot transfer.",
"We observe almost the same performance pattern compared to training with genuine documents in all settings (En Xx and Xx En, N21 and 12N transfer and different metrics), although BLEU scores become worse and the optimal proportion also changes.",
"We argue that the target-side genuine context information in BT documents helps contextual model-4181 Xx En BLEU F1 High Low High Low High Low High Low DocNMT + 12N transfer -1.07 -1.79 -1.71 -1.15 +1.73 +0.95 +0.73 +1.95 w/ BT -1.19 -1.37 -1.59 -0.97 +1.61 +2.38 +1.36 +2.64 En Xx BLEU ACC En-De ACC En-Fr High Low High Low High Low High Low DocNMT + 12N transfer -1.85 -2.05 -2.03 -1.87 +8.67 +6.27 +10.25 +7.50 w/ BT -3.29 -4.03 -4.39 -2.93 +8.55 +6.57 +6.61 +6.19 Table 2: Relative performance to multilingual SenNMT baseline when transferring from and into high-resource (High) and low-resource (Low) languages for En Xx and Xx En translation on Europarl-7.",
"ing (Ma et al., 2021).",
"These results are promising, encouraging further research on exploring monolingual documents for multilingual DocNMT.",
"Impact of high/low-resource languages on zero-shot transfer.",
"The data distribution of Europarl-7 is highly skewed over languages, with Cs, Lt, Pl being relatively low-resource languages while De, Fi, Fr being high-resource ones.",
"Studies on multilingual SenNMT have witnessed the transfer from high-resource to low-resource languages (Aharoni et al., 2019; Zhang et al., 2020).",
"We next analyze how this data scale difference affects document-level zero-shot transfer.",
"We mainly explore 12N transfer because of the single transfer source, avoiding interference from other teacher languages.",
"transferring from high-resource teacher languages often outperforms that from low-resource ones.",
"Besides, transferring into low-resource student languages delivers better transfer than into high-resource ones.",
"These suggest that increasing the document data for teacher languages benefits zero-shot transfer.",
"Note we also provide transfer results from individual languages to De and Fr in Appendix D. Performance on Europarl-7 and IWSLT-10 We summarize the main results on both datasets in Table 3.",
"Although IWSLT-10 (N=9) includes more (distant) languages and distributes quite differently over languages, the results on IWSLT-10 resemble those on Europarl-7.",
"On both datasets, we observe that transfer, both 12N and N21, yields very positive results, particularly with document-specific metrics.",
"Unlike Europarl-7, BT-based transfer per-4182 Models Human Rating ( ) Reference 4.96 SenNMT (Baseline) 3.31 DocNMT w/ SenInfer 3.60 DocNMT w/ DocInfer 3.84 N21 Transfer w/ DocInfer 3.46 12N Transfer w/ DocInfer 2.78 N21 Transfer + BT w/ DocInfer 3.18 12N Transfer + BT w/ DocInfer 2.72 Table 4: Document-level human ratings ( ) for En-De on Europarl-7.",
"forms much worse than models trained on genuine document pairs on IWSLT-10.",
"We ascribe this to the data scarcity, where only very small-scale monolingual documents are used for BT in IWSLT-10.",
"This also reinforces our observation that more document resources benefits zero-shot transfer.",
"Apart from automatic evaluation, we also offer human evaluation on En-De.",
"We choose En-De as its WMT20 test set is intentionally constructed for DocNMT evaluation.",
"Table 4 lists the results.",
"We observe that zero-shot transfer matches and even surpasses SenNMT through N21 transfer, but fails with 12N transfer, although accuracy improvements on contrastive test sets show that both transfers are better than SenNMT.",
"We conjecture that these contrastive test sets only target a limited number of document phenomena and thus can't fully reflect the overall translation quality and represent human preference.",
"These numbers verify the feasibility of document-level zero-shot transfer through multilinguality.",
"Besides, we find that genuine parallel documents benefit the transfer slightly more than BT-based pseudo ones, and that the supervised DocNMT reaches the best result under DocInfer.",
"We surprisingly find that DocNMT with SenInfer yields very competitive performance, although no contextual information is used for decoding.",
"We also observe that such decoding tends to produce longer translations than SenNMT despite using the same decoding hyperparameters.",
"This behaviour should be shaped by the fact that DocNMT is biased towards long concatenated target references.",
"This partially agrees with the recent argument that context improves DocNMT with some sort of regularization rather than teaching the model to deal Models ACC En-Fr ACC En-De SenNMT w/ SenInfer 50.00 52.00 SenNMT w/ DocInfer 58.50 (cid:63) 50.80 DocNMT w/ SenInfer 50.00 51.90 DocNMT w/ DocInfer 64.50 66.80 Table 5: Applying DocInfer and SenInfer to DocNMT and SenNMT for contrastive evaluation.",
"with context (Kim et al., 2019).",
"On the other hand, this challenges how to properly evaluate DocNMT.",
"Another observation is that applying DocInfer to SenNMT delivers a significant accuracy improvement on En-Fr contrastive test set (+8.5%, Table 5), but slightly worse results on En-De.",
"To accurately recognize the correct translation in these test sets, models need to leverage context.",
"Such improvement might suggest that SenNMT has some limited capability of contextual modeling, but might just reflect the instability of small-scale test sets (only 200 cases in En-Fr test set, indicating a radius of around 7% for the 95% confidence interval).",
"To some extent, this devalues the improvement achieved by 12N transfer as shown in Table 3, but strengthens the success of N21 transfer (often > 9% gains).",
"This paper studies the variables playing role in achieving zero-shot document-level translation capability for languages that only have sentence level data (students), through multilingual transfer from languages that have access to document level data (teachers).",
"We make the first step in this direction by extensively exploring properties of transfer by investigating three different variables.",
"Our experiments on Europarl-7 and IWSLT-10 confirm the feasibility, where we discover that increasing document-supervised teacher languages thereby increasing the document training data size, adequately balancing between document and sentence data at training, and leveraging monolingual documents via back-translation all benefit zero-shot transfer in varying degrees.",
"The transferability of contextual modeling in DocNMT demonstrates the potential of delivering multilingual DocNMT with limited document resources.",
"Along with the success of document-level zero-shot transfer, problems with accurately estimating the document-level translation become challenging.",
"BLEU often fails to capture document phenomena, while contrastive test sets only cover few document-level aspects.",
"Neither perfectly corre-4183 lates with human evaluation.",
"Besides, whether the gains really come from contextual modeling is still unclear.",
"Our human evaluation shows some preference to DocNMT with SenInfer where context is not used for decoding at all.",
"Designing better evaluation protocols (either automatic or human) is again confirmed to be critical.",
"Besides, performing analysis beyond 12N and N21 transfer deserves more effort and it is an interesting and plausible future direction to analyze how language similarity affects the transfer.",
"We thank the reviewers for their insightful comments.",
"We want to thank Macduff Hughes and Wolfgang Macherey for their valuable feedback.",
"We would also like to thank the Google Translate team for their constructive discussions and comments."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Recently research has started focusing on avoiding undesired effects that come with content moderation, such as censorship and overblocking, when dealing with hatred online.",
"The core idea is to directly intervene in the discussion with textual responses that are meant to counter the hate content and prevent it from further spreading.",
"Accordingly, automation strategies, such as natural language generation, are beginning to be investigated.",
"Still, they suffer from the lack of sufficient amount of quality data and tend to produce generic/repetitive responses.",
"Being aware of the aforementioned limitations, we present a study on how to collect responses to hate effectively, employing large scale unsupervised language models such as GPT-2 for the generation of silver data, and the best annotation strategies/neural architectures that can be used for data filtering before expert validation/post-editing.",
"Owing to the upsurge in the use of social media platforms over the past decade, Hate Speech (HS) has become a pervasive issue by spreading quickly and widely.",
"Meanwhile, it is difficult to track and control its diffusion, since nuances in cultures and languages make it difficult to provide a clear-cut distinction between hate and dangerous speeches (Schmidt and Wiegand, 2017).",
"The standard approaches to prevent online hate spreading include the suspension of user accounts or deletion of hate comments from the social media platforms (SMPs), paving the way for the accusation of censorship and overblocking.",
"Alternatively, to weigh the right to freedom of speech, shadow-banning has been put into use where the content/account is not deleted but hidden from SMP search results.",
"Still, we believe that we must overstep reactive identify-and-delete strategies to responsively intervene in the conversations (Bielefeldt et al., 2011; Jurgens et al., 2019).",
"In this line of action, some Non-Govermental Organizations (NGOs) train operators to intervene in online hateful conversations by writing counter-narratives.",
"A Counter-Narrative (CN) is a non-aggressive response that offers feedback through fact-bound arguments and is considered as the most effective approach to withstand hate messages (Benesch, 2014; Schieb and Preuss, 2016).",
"To be effective, a CN should follow guidelines similar to those provided in the Get the Trolls Out' project 1 , in order to avoid escalating the hatred in the discussion.",
"Still, manual intervention against hate speech is not scalable.",
"Therefore, data-driven NLG approaches are beginning to be investigated to assist NGO operators in writing CNs.",
"As a necessary first step, diverse CN collection strategies have been proposed, each of which has its advantages and shortcomings (Mathew et al., 2018; Qian et al., 2019; Chung et al., 2019).",
"In this study, we aim to investigate methods to obtain high quality CNs while reducing efforts from experts.",
"We first compare data collection strategies depending on the two main requirements that CN datasets must meet:",
"(i) data quantity and",
"(ii) data quality.",
"Finding the right trade-off between the two is in fact a key element for an effective automatic CN generation.",
"To our understanding none of the collection strategies presented so far is able to fulfill this requirement.",
"Thus, we test several hybrid strategies to collect data, by mixing niche-sourcing, crowd-sourcing, and synthetic data generation obtained by fine-tuning deep neural architectures specifically developed for NLG tasks, such as GPT-2 (Radford et al., 2019).",
"We propose using an author-reviewer framework in which an author is tasked with text generation and a reviewer 1 http://stoppinghate.getthetrollsout.org/ can be a human or a classifier model that filters the produced output.",
"Finally, a validation/post-editing phase is conducted with NGO operators over the filtered data.",
"Our findings show that this framework is scalable allowing to obtain datasets that are suitable in terms of diversity, novelty, and quantity.",
"We briefly focus on three research aspects related to hate online, i.e. available datasets, methodologies for detection, and studies on the effectiveness of the textual intervention.",
"In the following section instead, we will focus on a few methodologies specifically devoted to HS-CN pairs collection.",
"Hate datasets.",
"Several datasets have been collected from SMPs including Twitter (Waseem and Hovy, 2016; Waseem, 2016; Ross et al., 2017), Facebook (Kumar et al., 2018), WhatsApp (Sprug-noli et al., 2018), and forums (de Gibert et al., 2018), in order to perform hate speech classification (Xiang et al., 2012; Silva et al., 2016; Del Vi-gna et al., 2017; Mathew et al., 2018).",
"Hate detection.",
"Most of the research on hatred online focuses on hate speech detection (Warner and Hirschberg, 2012; Silva et al., 2016; Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018) employing features such as lexical resources (Gitari et al., 2015; Burnap and Williams, 2016), sentiment polarity (Burnap and Williams, 2015) and multimodal information (Hosseinmardi et al., 2015) to build a classifier.",
"Hate countering.",
"Recent work has proved that counter-narratives are effective in hate countering (Benesch, 2014; Silverman et al., 2016; Schieb and Preuss, 2016; Stroud and Cox, 2018; Mathew et al., 2019).",
"Several CN methods to counter hatred are outlined and tested by Benesch (2014), Munger (2017), and Mathew et al. (2019).",
"Three prototypical strategies to collect HS-CN pairs have been presented recently.",
"Crawling (CRAWL) .",
"Mathew et al. (2018) focus on the intuition that CNs can be found on SMPs as responses to hateful expressions.",
"The proposed approach is a mix of automatic HS collection via linguistic patterns, and a manual annotation of replies to check if they are responses that counter the original hate content.",
"Thus, all the material collected is made of natural/real occurrences of HS-CN pairs.",
"Crowdsourcing (CROWD) .",
"Qian et al. (2019) propose that once a list of HS is collected from SMPs and manually annotated, we can briefly instruct crowd-workers (non-expert) to write possible responses to such hate content.",
"In this case, the content is obtained in controlled settings as opposed to crawling approaches.",
"Nichesourcing (NICHE) .",
"The study by Chung et al. (2019) still relies on the idea of outsourcing and collecting CNs in controlled settings.",
"However, in the nichesourcing the CNs are written by NGO operators, i.e. persons specifically trained to fight online hatred via textual responses that can be considered as experts in CN production.",
"Regardless of the HS-CN collection strategy, datasets must meet two criteria: quality and quantity .",
"While quantity has a straightforward interpretation, we propose that data quality should be decomposed into conformity (to NGOs guidelines) and diversity (lexical & semantic).",
"Additionally, HS-CN datasets should not be ephemeral, which is a structural problem with crawled data since, due to copyright limitations, datasets are usually distributed as a list of tweet IDs (Klubicka and Fernandez, 2018).",
"By generating the data through crowdsourcing or nichesourcing, the problem is avoided.",
"Quantity.",
"While the CRAWL dataset is very small and ephemeral, representing more a proof of concept than an actual dataset, the CROWD dataset involved more than 900 workers to produce 41 K CNs.",
"On the other hand, the NICHE dataset is constructed by the participation of 100 expert-operators to obtain 4 K pairs (in three languages) and resorted to HS paraphrasing and pair translation to obtain the final 14 KHS-CN pairs.",
"Evidently, employing non-experts, e.g, crowdworkers or annotators, is preferable in terms of data quantity.",
"Quality.",
"In terms of quality, we consider that diversity is of paramount importance, since verbatim repetition of arguments can become detrimental for operator credibility and for the CN intervention itself.",
"Following Li et al. (2016a), we distinguish between",
"(i) lexical diversity and",
"(ii) semantic diversity .",
"While lexical diversity focuses on the diversity in surface realization of CNs and can be captured by word overlapping metrics, semantic diversity focuses on the meaning and is harder to be captured, as in the case of CNs with similar meaning but different wordings (e.g., Any source? vs. Do you have a link? ).",
"(i) Semantic Diversity & Conformity .",
"To model semantic diversity and conformity, we focus on the CN argument' types that are present in various datasets.",
"Argument types are useful in assessing content richness (Hua et al., 2019).",
"In a preliminary analysis, CROWD CNs are observed to be simpler and mainly focus on denouncing' the use of profanity while NICHE CNs are found richer with a higher variety of arguments.",
"On the other hand, CRAWL CNs can cover diverse arguments to a certain extent while being highly prone to contain profanities.",
"To perform a quantitative comparison, we randomly sampled 100 pairs from each dataset and annotated them according to the CN types presented by Benesch et al. (2016), which is the most comprehensive CN schema to our knowledge.",
"The results are reported in Table 1. For the sake of conciseness we focus on the hostile , denouncing , and consequences classes, giving other to all remaining types (including the fact class).",
"Clearly, CRAWL does not meet the conformity standards of CNs considering the vast amount of hostile responses (50%), still granting a certain amount of type variety ( other: 34%).",
"Contrarily, CROWD conforms to the CN standards ( hostile: 0%), yet mostly focuses on pure denouncing (76 % ) or denouncing with simple arguments (10 % ).",
"The class other (14 % ) consists of almost only simple arguments, such as All religions deserve tolerance .",
"In NICHE instead, arguments are generally and expectedly more complex and articulated, and represent the vast majority of cases (81%).",
"Few examples of CN types are given in Table 2.",
"(ii) Lexical Diversity .",
"The Repetition Rate (RR) is used to measure the repetitiveness of a collection of texts, by considering the rate of non-singleton n-gram types it contains (Cettolo et al., 2014; Bertoldi et al., 2013).",
"We utilize RR instead of the simple count of distinct ngrams (Xu et al., 2018; Li et al., 2016b) or the standard type/token ratio (Richards, 1987) since it allows us to compare corpora of diverse sizes by averaging the statistics collected on a sliding window of 1000 words.",
"Since CROWD and NICHE contain repeated CNs for different HSs 2 , we first removed repeated CNs and then applied a shuffling procedure to avoid that CNs that are answering to the same HS (so more likely to contain repetitions) appear close together.",
"Results in Table 1 show that NICHE is the dataset with more lexical diversity (lower RR), followed by CRAWL and CROWD.",
"Discussion.",
"We can reasonably conclude that:",
"(i) crawling, as presented in (Mathew et al., 2018), is not a mature procedure yet for CN collection, even if it is promising,",
"(ii) nichesourcing is the one producing the best and most diverse material by far, however it is also the most challenging to implement considering the difficulty of making agreements with NGOs specialized in CN creation and it does not provide sufficient amount of data.",
"(iii) On the contrary, CROWD seems to be the only one that can grant the amount of data that is needed for deep learning approaches, but contains more simple and stereotyped arguments.",
"A summary of the pros and cons of each collection approach is presented in Table 3. 5 CN Generation through Author-Reviewer Architecture Since none of the aforementioned approaches alone can be decisive for creating proper CN datasets, we propose a novel framework that combines crowdsourcing and nichesourcing to obtain new quality data while reducing collection cost/effort.",
"The key elements of this combination are:",
"(i) there must be an external element in the framework that produces HS-CN candidates,",
"(ii) non-experts should pre-filter the material to be presented/validated by experts.",
"Thus, we settle on the author-reviewer modular architecture (Oberlander and Brew, 2000; Manu-rung et al., 2008).",
"In this architecture the author has the task of generating a text that conveys the correct propositional content (a CN), whereas the reviewer must ensure that the author's output satisfies certain properties.",
"The reviewer finally evaluates the text 2 While this is an explicit data augmentation choice in NICHE, for CROWD it seems to derive from writing the same CNs for similar HSs by crowd-workers.",
"Hostile Hell is where u belong! Stupid f***t... go hang yourself!!",
"Denouncing The N word is unacceptable. Please refrain from future use.",
"Fact The majority of sexual assaults are committed by a family member, friend, or partner of the victim, and only 12% of convicted rapists are Muslim. It is not the religion, its the individuals, whether they're Muslim or not.",
"viability and picks the ones to present to the NGO operators for final validation/post-editing.",
"The author-reviewer architecture that we propose differs from the previous studies in two respects:",
"(i) it is used for data collection rather than for NLG,",
"(ii) we modified the original configuration by adding a human reviewer and a final post-editing step.",
"We first tested four different author configurations, then three reviewer configurations keeping the best author configuration constant.",
"A representation of the architecture is shown in Figure 1. 6 The Author: Generation Approaches In order to obtain competent models that can provide automatic counter-narrative hints and suggestions to NGO operators, we have to overcome the data bottleneck/limitations, i.e. either the limited amount of training data in NICHE or its repetitiveness in CROWD, especially for using neural NLP approaches.",
"Since pre-trained Language Models (LMs) have achieved promising results when fine-tuned on challenging generation tasks such as chit-chat dialog (Wolf et al., 2019; Golovanov et al., 2019), we propose using a recent large-scale language model GPT-2 (Radford et al., 2019).",
"GPT-2 is an unsupervised transformer-based (Vaswani et al., 2017) LM trained on a dataset of 8 million web pages, capable of generating coherent text and can be fine-tuned and/or conditioned on various NLG tasks.",
"We used the medium model, which was the largest available during our experimentation and contains 345 million parameters, with 24 layers, 16 attention heads, and hidden state size of 1024.",
"We fine-tuned two models with GPT2, one on NICHE and one on CROWD datasets for counter-narrative generation.",
"NICHE Training and test data.",
"We have split 5366 pairs of HS-CN for training and the rest (1288 pairs) for testing.",
"In particular, the original HS-CN pairs, one HS paraphrase, and the pairs translated from FR and IT were kept for training while the other HS paraphrases were used for testing.",
"See Chung et al. (2019) for further details.",
"CROWD Training and test data.",
"Although the CROWD dataset was created for dialogue level HS-CN, we could extract HS-CN pairs by selecting the dialogues in which only 1 utterance was labeled as HS.",
"Therefore, we could guarantee that the crowd-produced CNs are exactly for the labeled utterance.",
"We then applied a 80/20 training and test split, obtaining 26320 and 6337 pairs.",
"Generation Models.",
"We fine-tuned GPT-2 3 , with a batch size of 1024 tokens and a learning rate of 2e-5.",
"The training pairs are represented as [ HS start token ] HS [ HS end token ] [ CN start token ] CN [ CN end token ] .",
"While we empirically selected model checkpoint at the 3600 th step of fine-tuning with NICHE dataset, with CROWD dataset we selected the checkpoint at the 5000 th step.",
"After fine-tuning the models the generation of CNs for the test HSs has been performed using Nucleus Sampling (Holtzman et al., 2019) with a p value of 0.9, which provides an enhanced diversity on the generation in comparison to the likelihood maximization decoding methods while preserving the coherency by truncating the less reliable tail of the distribution.",
"At the test time, the input HSs are fed into models as conditions, which are used as the initial contexts while sampling the next tokens.",
"Given an input HS, the models produce a chunk of text which is a list of HS-CN pairs of which the first sequence marked with [ CN start token ] CN [ CN end token ] is the generated output.",
"Baselines.",
"In addition to the fine-tuned GPT-2 models, we also evaluate two baseline models.",
"Considering the benefits of the transformer architectures on parallelization and learning long-term dependencies over recurrent models (Vaswani et al., 2017), we have implemented the baseline models using transformer architecture.",
"The models have been trained similar to the base model described by Vaswani et al. (2017) with 6 transformer layers, batch size of 64, 100 epochs, 4000 warmup steps, input/output dimension of 512, 8 attention heads, inner-layer dimension of 2048, and drop-out rate of 0.1.",
"We used Nucleus Sampling also for the baselines with a p value of 0.9 during decoding.",
"In brief, we have trained four different configu-3 We adopted the fine-tuning implementation from https: //github.com/nshepperd/gpt-2 rations/models as authors: 1. TRF crowd : baseline on CROWD dataset 2. GPT crowd : fine-tuned GPT-2 on CROWD dataset 3. TRF niche : baseline on NICHE dataset 4. GPT niche : fine-tuned GPT-2 on NICHE dataset Metrics.",
"We report both standard metrics (BLEU (Papineni et al., 2002), BertScore (Zhang et al., 2019)) concerning the lexical and semantic generation performances and a specific Diversity metric (RR) regarding the generation quality.",
"As a second quality metric, we report Novelty (Wang and Wan, 2018) based on Jaccard similarity function (a variant of the same metric is used also by Dziri et al. (2019)).",
"While diversity is used to measure the ability of the model to produce diverse/varied responses with respect to the given input HS, novelty is used to measure how different the generated sequences are with regard to the training corpus (Wang and Wan, 2018).",
"Results.",
"Results of the author model experiments are shown in Table 4. In terms of BLEU and BertScore, baseline models yield a better performance.",
"However, a few peculiarities of CN generation task and the experiment settings hinder the direct and objective comparison of the presented scores among the models.",
"First, gathering a finite set of all possible counter-narratives for a given hate speech is a highly unrealistic target.",
"Therefore, we have only a sample of proper CNs for each HS, which is a possible explanation of very low scores using the standard metrics.",
"Second, the train-test splits of NICHE dataset contain same CNs since the splitting has been done using one paraphrase for each HS and its all original CNs, while CROWD train-test splits have a similar property since an exact same CN can be found for many different HSs.",
"Consequently, the non-pretrained transformer models, which are more prone to generating an exact sequence of text from the training set, show a relatively better performance with the standard metrics in comparison to the advanced pre-trained models.",
"Some randomly sampled CNs, generated by the various author configurations, are provided in Appendix.",
"Regarding the generation quality, we observe that baseline models cannot achieve the diversity achieved by GPT-2 models in terms of RR both for NICHE and CROWD (4.89 vs 3.23, and 8.93 vs. 5.89).",
"Moreover, GPT-2 provides an impressive boost in novelty (0.04 vs 0.46 and 0.10 vs 0.70).",
"Among the GPT-2 models, the quality scores (in terms of RR and novelty) of the CNs generated by GPT niche are more than double in comparison to those generated with GPT crowd .",
"With regard to the overall results, GPT niche is the most promising configuration to be employed as author.",
"In fact, we observed that, after the output CN, the over-generated chunk of text consists of semantically coherent brand-new HS-CN pairs, marked with proper HS/CN start and end tokens consistent with the training data representation.",
"Therefore, on top of CN generation for a given HS, we can also take advantage of the over-generation capabilities of GPT-2, so that the author module can continuously output plausible HS-CN pairs without the need to provide the HS to generate the CN response.",
"This expedient allows us to avoid the ephemerality problem for HS collection as well.",
"To generate HS-CN pairs with the author module, we basically exploited the model test setting and conditioned the fine-tuned model with each HS in the NICHE test-set.",
"After removing the CN output for the test HS, we could obtain new pairs of HS-CN.",
"In this way, we generate 2700 HS-CN pairs that we used for our reviewer-configuration experiments.",
"The task of the reviewer is a sentence-level Confidence Estimation (CE) similar to the one of Machine Translation (Blatz et al., 2004).",
"In this task, the reviewer must decide whether the author output is correct/suitable for a given source text, i.e. a hate speech.",
"Consistently with the MT scenario, one application of CE is filtering candidates for possible human post-editing, which is conducted by the NGO operator by validating the CN.",
"We tested three reviewer configurations: 1. expert-reviewer : Author output is directly presented to NGO operators.",
"2. non-expert-reviewer : Author output is filtered by human reviewers, then validated by operators.",
"3. machine-reviewer : Filtering is done by a classifier neural-architecture before operator validation.",
"Setup.",
"We administered the generated 2700 HS-CN pairs to three non-expert annotators, and instructed them to evaluate each pair in terms of CN suitableness' with regard to the corresponding hate speech.",
"Instructions.",
"We briefly described what an appropriate and suitable CN is, then we instructed them not to overthink during the evaluation, but to give a score based on their intuition.",
"We also provided a list of 20 HS-CN pairs exemplifying the proper evaluation.",
"Measurement.",
"We opted for a scale of 0-3, rather than a CE binary response, since it allows us to study various thresholds for better data selection.",
"In particular, the meanings of the scores are as follows: 0 is not suitable; 1 is suitable with small modifications, such as grammar or semantic; 2 is suitable; and 3 is extremely good as a CN.",
"We also ask to discard the pairs in which the hate speech was not well formed.",
"For each pair we gathered two annotator scores.",
"Filtered Data.",
"After the non-expert evaluation, we applied two different thresholds to obtain the pairs to be presented to the expert operators:",
"(i) at least a score of 2 by both annotators (Reviewer 2 ) yielding high quality data where no post editing is necessary,",
"(ii) at least a score 1 by both annotators (Reviewer 1 ) providing reasonable quality with a possible need for post-editing.",
"The statistics reported in Table 5 show that high quality pairs (Reviewer 2 ) account for only a small fraction (10%) of the produced data and only one third was of reasonable quality (Reviewer 1 ), while the vast majority was discarded.",
"Some randomly selected filtered pairs are provided in Appendix.",
"As the machine reviewer we implemented 2 neural classifiers tasked with assessing whether the given HS-CN is a proper data pair.",
"The two models are based on BERT (Devlin et al., 2019) and ALBERT (Lan et al., 2019) architectures.",
"Training data.",
"We created a balanced dataset with 1373 positive and 1373 negative examples for training purposes.",
"The positive pairs come both from NICHE dataset and from the examples annotated in the human reviewer setting (Reviewer 2 ).",
"The negative pairs consist of the examples annotated in the human reviewer setting, in the at least one 0' bin.",
"In addition, 50 random HSs from NICHE-training are utilized with verbatim repetition as HS-HS to discourage the same text for both HS and CN in a pair, and 50 random HSs are paired with other random HSs simulating the condition of inappropriate CNs with hateful text.",
"Test data.",
"We collected a balanced test set, with 101 positive and 101 negative pairs.",
"Both positive and negative examples are created replicating the non-expert reviewer annotation described in Section 7.1 for new CN generation with NICHE test set by using the author model GPT niche .",
"Models.",
"For the first model, we follow the standard sentence-pair classification fine-tuning schema of the original BERT study.",
"First, the input HS-CN is represented as [ CLS ] HS tokens [ SEP ] CN tokens [ SEP ] and fed into BERT.",
"By using the final hidden state of the first token [CLS] as the input, originally denoted as C RH , we obtain a fixed-dimensional pooled representation of the input sequence.",
"Then, a classification layer is added with the parameter matrix W RKH , where K denotes the number of labels, i.e. 2 for HS-CN classification.",
"The cross-entropy loss has been used during the fine-tuning.",
"We have conducted a hyperparameter tuning phase with a grid-search over the batch sizes 16 and 32, the learning rates [4,3,2,1]e-5 and the number of epochs in the range of 3 to 8.",
"We obtained the best model by fine-tuning uncased BERT-large, with a learning rate of 1e-5, batch size of 16, and after 6 epochs at the 1029 th step on a single GPU.",
"The second model is built by fine-tuning ALBERT, which shows better performance than BERT on inter-sentence coherence prediction by using a sentence-order prediction loss instead of next-sentence prediction.",
"In sentence-order prediction loss, while the positive examples are created similar to BERT by using the consecutive sentences within the same document, the negative examples are created by swapping sentences, which leads the model to capture the discourse-level coherence properties better (Lan et al., 2019).",
"This objective is particularly suitable for HS-CN pair classification task, since HS and CN order and their coherence are crucial for our task.",
"We fine-tuned ALBERT similarly to BERT model, by adding a classification layer on top of it.",
"We applied the same grid-search that we used for BERT model to fine-tune ALBERT-xxlarge which contains 235M parameters.",
"We saved a checkpoint at every 200 steps and finally, obtained the best model by using the learning rate of 1e-5, the batch size of 16, and at the 1200 th step.",
"4 Metrics .",
"To find the best model for machine reviewer, we compared BERT and ALBERT models over the test set.",
"Although it seems more intuitive to focus on precision since we search for an effective filtering over many possible solutions, we observed that a model with a very high precision tends to overfit on generic responses, such as Evidence please? .",
"Therefore, we aim to keep the balance between the precision and recall and we opted for F1 score for model selection.",
"We report the best configurations for each model in Table 6, and the percentage of filtered pairs in Table 5.",
"ALBERT classifier outperformed BERT model in all three metrics; F1, Precision, and Recall.",
"Considering 6% of absolute F1 score improvement with respect to BERT model, we employed ALBERT model as the Machine Reviewer.",
"To verify that the author-reviewer approach can boost HS-CN data collection, we run an experiment with 5 expert operators from an NGO.",
"We compared the filtering strategies to reveal the best depending on several metrics.",
"4 All the experiments have been conducted on a single GeForce RTX 2080 Ti GPU.",
"Only the ALBERT classifier model has been trained with 8 TPU cores on Google Cloud.",
"Within Subject Design.",
"We administered lists of HS-CN pairs to 5 operators from each filtering condition, and instructed them to evaluate/modify each pair in terms of suitableness' of the CN to the corresponding HS.",
"Instructions.",
"For each HS-CN pair, we asked the operators:",
"a) if the CN is a perfect answer, to validate it without any modification,",
"b) if the CN is not perfect, but a good answer can be obtained with some editing, to modify it,",
"c) if the CN is completely irrelevant and/or needs to be completely rewritten to fit the given HS, to discard it.",
"Measurement.",
"The main goal of our effort is to reduce the time needed by experts to produce training data for automatic CN generation.",
"Therefore the primary evaluation measure is the average time needed to obtain a proper pair.",
"The other measurements of interest are Diversity and Novelty, to understand how the reviewing procedure can affect the variability of the obtained pairs.",
"Procedure and material.",
"We gave the instructions along with a list of 20 HS-CN exemplar pairs for each condition (i.e. Reviewer 1 , 2 , machine , expert ).",
"The condition order was randomized to avoid primacy effect.",
"In total, each NGO operator evaluated 80 pairs.",
"Pairs were sampled from the pool of 2700 pairs described before (apart from the automatic filtering condition).",
"To guarantee that the sample was representative of the corresponding condition, we performed a stratified sampling and avoided repeating pairs across subjects.",
"Results and Discussion.",
"As it is shown in Table 7, there is a substantial decrease in data collection time (NGO time ) when automatic generation mechanisms are introduced (no suggestion vs. Reviewer expert ).",
"If crowd filtering is applied (Reviewer 1 , 2 ), the amount of time can be further reduced, and the more stringent the filtering criterion, the higher the time saved.",
"Conversely, the more stringent the filtering criterion, the higher the time to obtain a filtered pair from non-expert annotators (CROWD time ).",
"For instance to obtain a single pair with at least a score of 2 by both annotators, 700 sec (around 12 min) are needed on average (only 10% of examples are in 2 condi-tion).",
"Results indicate that providing an automatic generation tool meets the first goal of increasing efficiency of the operators in data collection.",
"Regarding diversity and novelty metrics, pre-filtering author's output (Reviewer 1 , 2 and machine ) has a negative impact: the more stringent the filtering condition the higher the RR and the lower the novelty of the filtered CNs.",
"We performed some manual analysis of the selected CNs and we observed that especially for the Reviewer 2 case (which was the most problematic in terms of RR and novelty) there was a significantly higher ratio of generic responses, such as This is not true. or How can you say this about an entire faith? , for which reviewers agreement is easier to attain.",
"Therefore, the higher agreement on the generic CNs reveals itself as a negative impact in the diversity and novelty metrics.",
"Conversely, the percentage of pre-filtered pairs that are accepted by the expert increases with the filtering condition becoming more stringent, the baseline being 45% for the Reviewer expert condition.",
"As for the amount of operators' effort, we observed a slight decrease in HTER 5 with the in-crease of pre-filtering conditions, indicating an improvement in the quality of candidates.",
"However, HTER scores were all between 0.1 and 0.2, much below the 0.4 acceptability threshold de-fined by Turchi et al. (2013), indicating that operators modified CNs only if easily amendable.",
"5 Human-targeted Translation Edit Rate is a measure of post-editing effort at sentence level translations (Specia and Farzindar, 2010).",
"Finally, we observe that despite reducing the ouput diversity and novelty, the reduction of expert effort by Reviewer 2 in terms of the percentage of the obtained pairs is not attainable by a machine yet.",
"On the other hand, automatic filtering (Reviewer machine ) is a viable solution since",
"(i) it helps the NGO operators save time better than human filter 1,",
"(ii) it preserves diversity and novelty better than Reviewer 2 and in line with Reviewer 1 .",
"To counter hatred online and avoid the undesired effects that come with content moderation, intervening in the discussion directly with textual responses is considered as a viable solution.",
"In this scenario, automation strategies, such as natural language generation, are necessary to help NGO operators in their countering effort.",
"However, these automation approaches are not mature yet, since they suffer from the lack of sufficient amount of quality data and tend to produce generic/repetitive responses.",
"Considering the aforementioned limitations, we presented a study on how to reduce data collection effort, using a mix of several strategies.",
"To effectively and efficiently obtain varied and novel data, we first propose the generation of silver counter-narratives using large scale unsupervised language models then a filtering stage by crowd-workers and finally an expert validation/post-editing.",
"We also show promising results obtained by replacing crowd-filtering with an automatic classifier.",
"As a final remark, we believe that the proposed framework can be useful for other NLG tasks such as paraphrase generation or text simplification.",
"This work was partly supported by the HATEME-TER project within the EU Rights, Equality and Citizenship Programme 2014-2020.",
"We are grateful to Stop Hate UK that provided us with the experts for the evaluation.",
"Finally, there are also many people we would like to thank for their help and useful suggestions: Eneko Agirre, Simone Magnolini, Marco Turchi, Sara Tonelli and the anonymous reviewers among others."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"result",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"other",
"other",
"other"
] |
[
"Typical fact verification models use retrieved written evidence to verify claims.",
"Evidence sources, however, often change over time as more information is gathered and revised.",
"In order to adapt, models must be sensitive to subtle differences in supporting evidence.",
"We present VITAMINC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes.",
"We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs.",
"Unlike previous resources, the examples in VITAMINC are contrastive , i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not.",
"We show that training using this design increases robustnessimproving accuracy by 10% on adversarial fact verification and 6% on adversarial natural language inference (NLI).",
"Moreover, the structure of VITAMINC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
"1 1 Introduction Determining the truthfulness of factual claims by comparing them to textual sources of evidence has received intense research interest in recent years.",
"An underlying, but often overlooked, challenge for this paradigm, however, is the dynamic nature of today's written resources.",
"An extraordinary amount of new information becomes available daily; as a result, many consequential facts are established, changed, or added to over time.",
"We argue that the quality of fact verification systems should be 1 The VITAMINC dataset and our models are available at: https://github.com/TalSchuster/VitaminC its population is estimated to be 86,205, almost 14% more than the 2000 census figure of 76,129.",
"measured by how well they adjust to new evidence.",
"In this way, we seek to advance fact verification by requiring that models remain reliable and robust to the change present in practical settings.",
"To this end, we focus on fact verification with contrastive evidence .",
"That is, we infuse the standard fact verification paradigm with challenging cases that require models to be sensitive to factual changes in their presented evidence (hereon referred to interchangeably as context).",
"We present VITAMINC, 2 a new large-scale fact verification dataset that is based on factual revisions to Wikipedia.",
"The key concept is exemplified in Figure 1: there a factual revision yields a contrastive pair of contexts that are nearly identical in language and contentexcept that one context refutes the given claim, while the other supports it.",
"This type of contrastive structure exposes existing deficiencies in model behavior.",
"To illustrate this, we train a classifier on the popular FEVER fact verification dataset (Thorne et al., 2018) and evaluate it on contrastive claim-evidence pairs.",
"We find that the model flips its prediction from the original verdict on only 56% of the contrastive cases.",
"When examples from VITAMINC are included during training, however, the model's sensitivity increases, flipping on 86% of contrastive cases.",
"Such context-sensitive inference has two main benefits.",
"First, it ensures that the model consid-2 Etymology of VITAMINC: Contrastive evidence keeps fact verification models robust and healthy, hence Vitamin C. ers the provided evidence rather than relying on built-in static knowledge, such as that obtained via language model pre-training (Petroni et al., 2019; Roberts et al., 2020).",
"This is particularly important for scenarios in which the source of truth is mutable (e.g., the current US president, or new declarations as in Figure 1).",
"Second, this setting discourages certain biases and idiosyncrasiessuch as exploiting differences in how true vs. false claims are posedthat are common in similar crowd-sourced datasets (Poliak et al., 2018; Schuster et al., 2019).",
"Indeed, we show that augmenting both fact verification models and NLI models with VITAMINC data improves their robustness to adversarial inputs.",
"Furthermore, our emphasis on contrastive contexts allows us to expand on the scope of commonly considered tasks.",
"Most of the fact verification literature focuses on resolving claims to be true or false (Popat et al., 2018; Thorne and Vlachos, 2018; Wang, 2017).",
"The surrounding ecosystem, however, includes additional challenges, some of which we explore here: Documents such as Wikipedia articles are updated frequently; which edits represent factual changes?",
"For a given claim and (refuting or supporting) evidence pair, which words or phrases in the evidence are most relevant?",
"If we know that a certain claim is true, can we modify an out-dated document to be consistent with it?",
"We show that the unique structure of our VITAMINC dataset can be leveraged to provide both supervised and distantly supervised data for these new questions.",
"Our key contributions are as follows:",
"1. We pose a contrastive fact verification paradigm that requires sensitivity to changes in data;",
"2. We introduce VITAMINC, a new large-scale dataset that supports this paradigm;",
"3. We demonstrate that training on VITAMINC leads to better performance on standard tasks;",
"4. We show how VITAMINC opens the door to additional research directions in fact verification.",
"Fact Verification.",
"The FEVER dataset (Thorne et al., 2018) fueled the development of many fact-checking models (e.g., see Hanselowski et al., 2018; Nie et al., 2019a,b; Yoneda et al., 2018, inter alia ).",
"The claim creation process, however, required crowd-workers to write claims related to Wikipedia articles, and was found to engender biases that allow an evidence-agnostic model to achieve unexpectedly high performance (Schus-ter et al., 2019).",
"Other recent datasets cover verification against tables (Chen et al., 2020), relational databases (Jo et al., 2019), Wikipedia references (Sathe et al., 2020), multiple articles (Jiang et al., 2020), and search snippets (Augenstein et al., 2019).",
"These resources all assume static ground truths.",
"In contrast, VITAMINC compares objective claims to a dynamic source of truth, and requires models to change their verdicts accordingly.",
"Annotation Bias.",
"Annotation artifacts are common in many NLP datasets, and affect performance on adversarial and contrastive examples (Gardner et al., 2020; Ribeiro et al., 2020; Ross et al., 2020).",
"Sentence-pair inference tasks such as fact verification (Paul Panenghat et al., 2020; Schuster et al., 2019) and NLI (Gururangan et al., 2018; McCoy et al., 2019; Poliak et al., 2018; Tsuchiya, 2018) are no exception.",
"Alleviating this bias requires either modeling solutions (Karimi Mahabadi et al., 2020; Pratapa et al., 2020; Shah et al., 2020; Thorne and Vlachos, 2020; Utama et al., 2020b), which have limited effectiveness (Utama et al., 2020a), or adversarially removing troublesome training examples (Bras et al., 2020) or manually collecting new ones (Nie et al., 2020; Thorne et al., 2019a), which is model specific.",
"Instead, our dataset design avoids single-sentence artifacts and provides model-agnostic challenging examples that increase the robustness of trained models.",
"Explainability.",
"Current fact verification datasets provide sentence-level rationales (DeYoung et al., 2020; Petroni et al., 2020) but do not enforce the model's verdict to rely on themleading to a potential discrepancy.",
"VITAMINC ensures the verdict is conditioned on the retrieved evidence.",
"Moreover, we use the revision history as distant supervision for word-level rationales, allowing for finer-grained explanations (Camburu et al., 2018; Lei et al., 2016; Portelli et al., 2020; Thorne et al., 2019b).",
"Factually Consistent Generation.",
"Generating texts that match given facts is a known challenge (Fan et al., 2020; Kryscinski et al., 2020; Lewis et al., 2020b; Parikh et al., 2020; Shah et al., 2020; Tian et al., 2020) as language models tend to degenerate and hallucinate (Holtzman et al., 2020; Schuster et al., 2020; Zhou et al., 2020).",
"Moreover, evaluation is non-trivial, and usually manual.",
"VITAMINC includes supervised data for training sequence-to-sequence models, and provides automatic evaluation via the fact verification classifier.",
"VITAMINC (abbreviated VitC) is based on revisions to English Wikipedia.",
"Wikipedia has become a comprehensive online resource that is rigorously maintained by a large and active community (Ben-jakob and Harrison, 2019).",
"While adversaries do try to insert disinformation, popular pages are usually quickly corrected (Kumar et al., 2016).",
"Furthermore, Wikipedia's policies dictate that its content should be written from a neutral perspective or should otherwise objectively state all points of view.",
"3 These properties make Wikipedia a suitable source of evidence for fact verification models.",
"In the following section, we outline our process for mining factual revisions from Wikipedia.",
"We collected the 5K most-viewed English Wikipedia articles 4 as of January 2020, along with any additional articles referred from them (on average 100 per article).",
"We also included all articles from the FEVER dataset (Thorne et al., 2018).",
"For each article, we retrieved up to 500 of its most recent revisions.",
"In May 2020, we added all COVID-19 related articles 5 and all of their 41K revisions at the time.",
"Combined together, this resulted in a total of 200 million revisions.",
"For each revision, we identified all of the modified sentences and stored two versions: (1) before, and (2) after the edit.",
"In our task, we are only interested in edits made with an intent to introduce a factual modification i.e., a change for which one can make a claim that is supported by one sentence, but not by the other.",
"6 To expedite annotation, we trained a BERT classifier (Devlin et al., 2019) on a small labeled set of revised sentences determined to be factual (Yang et al., 2017), and used this model to select the top 305K edited sentences from the corpus for manual annotation.",
"Trained human annotators were then presented with the sentence pairs, and were asked to mark the ones that indeed represented a factual change.",
"Sentences lacking self-contained context were filtered (e.g., short expressions from tables or bulleted lists).",
"Example annotations are presented in Table",
"1. Note that these annotations can also be 3 https://bit.ly/Wiki_Neutral_POV 4 https://bit.ly/Wiki_popular_pages 5 https://wikimediafoundation.org/covid19 6 Many edits only reflect grammatical corrections, paraphrasing, or Wikification (text formatting/page linking).",
"recursively recycled for re-training the automated BERT classifier in the future to expand the corpus further (we also introduce this as a task, see 4.1).",
"The factual Wikipedia revisions guide us in creating challenging claims for fact verification.",
"For each revision, annotators were asked to write two symmetric claims related to the same edit:",
"1. The first should be supported by the original sentence and refuted by the revised sentence;",
"2. The second should be supported by the revised sentence and refuted by the original sentence.",
"When an explicit contradiction was not possible, a not enough information (NEI) relation was used.",
"A group of 70 native English speakers 7 wrote and reviewed claims.",
"During the annotation period, annotations were delivered in weekly batches, from which we examined random samples to provide feedback and request corrections.",
"Annotators were instructed to write short and self-contained claims.",
"Furthermore, annotators were instructed to avoid copying exact phrases and values when possible, in order to avoid a bias for substantially higher word overlap in supporting pairs over refuting pairs.",
"For example, rather than stating, there are x confirmed cases of coronavirus in the US , one can write there are more than z confirmed cases of coronavirus in the US , which is supported if x > z and refuted otherwise.",
"For revisions that only add new information or that remove outdated facts without replacing them, annotators wrote a single claim.",
"Naturally, the real Wikipedia revisions we collect mostly describe facts that frequently change over time, or that are prone to mistakes and corrections (such as quantitative values, see Appendix A.1) (Faruqui et al., 2018; Yang et al., 2017).",
"Sensitivity to contrastive contexts, however, is desirable behavior for any claim.",
"This can both ensure consistency with external sources of truth, and improve the model's faithfulness via connecting the verdict with a specific evidence (Jacovi and Goldberg, 2020; Ross et al., 2020).",
"For example, we require the model to not only classify the claim Tom Hanks was honored by a president as true, but to also change its verdict to false if paired with a (fictional) contrasting evidence.",
"As a result, we can verify that the model prioritizes sentence-pair inference over 7 We sourced our annotators through TransPerfect.",
"memorization, which can help it generalize better.",
"Therefore, we use the FEVER dataset to augment VITAMINC with synthetic revisions to Wikipedia sentences.",
"We follow the setting of Schuster et al. (2019) to expand claim-evidence pairs from FEVER (Thorne et al., 2018).",
"Specifically, given a false claim from FEVER, we ask annotators to edit the sentence that refutes it so that it will then support the originally false claim.",
"Additionally, we ask them to write a new claim that is refuted by the new, modified sentence, but that is supported by the original version.",
"Following this method, we obtain two claims where each can be supported or refuted by the original, or the synthetically revised, sentence.",
"We follow the same process for constructing synthetic examples using true claims, but with flipped labels.",
"In total, 304,671 revised Wikipedia sentences were examined by annotators, of which 107,056 (35%) were found to express a factual modification and were passed to the group of expert annotators for claim writing.",
"As two symmetric claims with opposing facts were created (when possible) for each revision, this resulted in a total of 325,724 total claim-evidence pairs.",
"We collected 163,180 addi-Supports Refutes NEI Split Real Syn Real Syn Real Syn Train 124,864 60,850 71,108 60,850 52,981 Dev 21,102 10,382 12,146 10,382 9,042 Test 17,306 10,358 9,907 10,358 7,268 Table 2: Number of claim-evidence pairs in VITAMINC.",
"tional pairs following the synthetic process.",
"The data was partitioned as shown in Table",
"2. The assignment was done randomly by article, and is consistent with FEVER for overlapping articles.",
"Appendix A contains additional details.",
"The unique structure of VITAMINC allows us to derive annotations that provide a novel source of supervision for several fact-verification-related tasks.",
"We describe the four main tasks we consider in this work, along with baseline models: (1) factual revision flagging, (2) fact verification, (3) word-level rationales, and (4) factually consistent generation.",
"Figure 2 illustrates an example from VITAMINC.",
"We use the following notations: C is the space of short sentences that express an arbitrary factual statement that can potentially be verified or debunked by external sources.",
"S is the space of sentences that can be found in a trusted online resource (Wikipedia in this study).",
"( s t 1 , s t ) denotes the two versions of a sentence that was revised from s t 1 to s t S .",
"rel( c, s ) denotes the relation between the claim c C and observed evidence s S which can either support c ( SUP ), refute it ( REF ), or not contain enough information ( NEI ).",
"Online resources like Wikipedia are continuously changing.",
"In order to remain a reliable and neutral source for recent information, its active community of users must constantly verify and correct the revisions of others.",
"We define factual revision flagging as the task of identifying revisions that introduce a factual changee.g., by either modifying a certain fact, adding a new one, or removing an existing one.",
"Such an automated detection process can help the community moderate important articles by serving as a watchdog for factual revisions.",
"Furthermore, tracking factual revisions to certain articles can potentially help keep reliant articles consistent (e.g., citing articles, or non-English versions).",
"We pose factual revision flagging as a binary classification function f flag : S S { 0 , 1 } , where for a revision ( s t 1 , s t ) i , we set y i = 1 iff there exists a claim in C whose label ( SUP or REF ) changes as a result of the edit (i.e., SUP { REF , NEI } or REF { SUP , NEI } ).",
"Table 1 provides example factual and non-factual revisions.",
"We evaluate the following baseline models: Edit Distance.",
"We measure the edit distance between s t 1 and s t , assuming that larger edits are more likely to represent substantive changes.",
"We tune a decision threshold on the validation set.",
"BOW.",
"We use an MLP on top of a bag-of-words representation.",
"Each sentence is encoded as e , the average fast Text (Bojanowski et al., 2017) word embedding of its edited words (i.e., that were removed or modified in the revision).",
"The MLP input is then taken as [ e t 1 ; e t ; | e t e t 1 | ; e t e t 1 ] .",
"Our basic setting is similar to the inference task of the FEVER dataset.",
"8 We predict the verdict for a claim given an observed evidence, f verdict : C S { SUP , REF , NEI } .",
"The FEVER dataset, however, contains inde-pendent claim-evidence pairs.",
"In our setting, we have claims paired with revisions such that rel( c i , s t 1 ) (cid:54) = rel( c i , s t ) , creating contrastive triplets.",
"For example, the claim in Figure 2 states that the COVID-19 outbreak was identified before December.",
"VITAMINC matches it with two different contexts (before and after the presented revi-sion), that can either support or refute that claim.",
"Our baseline model is an ALBERT sentence-pair classifier that predicts rel( c, s ) .",
"Compared to BERT (Devlin et al., 2019), it uses fewer parameters by shrinking the embedding size and sharing layers, which we find to improve robustness.",
"Word-level rationales provide useful explanations for predictions of neural models (Lei et al., 2016).",
"Such explanations can be particularly useful for semi-automated fact verification, since they allow users to quickly interpret and trust the model's verdict.",
"9 In Figure 2, for example, the date of the first identified case can explain the verdict for the claim.",
"As first proposed by Lei et al. (2016), the standard definition of extractive rationales asks for selecting the minimal set of input tokens that is sufficient for preserving the model's prediction.",
"Here we use a slightly modified definition following Shah et al. (2020), where we identify the minimal set of evidence tokens where removing them 8 To focus on the inference task, as opposed to a full end-to-end system, we assume that we have access to an oracle retriever.",
"will change the input's label to NEI .",
"We pose this task as conditional masking, where we learn a function f rationale : C S { 0 , 1 } n , where n is the length of an evidence s S .",
"Given an evidence s = ( x 1 , . . . , x n ) and a claim c , where rel( c, s ) { SUP , REF } , we want to find a mask m such that rel( c, s (cid:12) m ) = NEI , where s (cid:12) m = (cid:40) x i if m [ i ] = 0; <mask> if m [ i ] = 1 .",
"Moreover, we want m to be as sparse as possible.",
"Intuitively, s (cid:12) m could be viewed as an incomplete revision in which the masked words that have not yet been filled in will determine the relation with the claim.",
"We say that m reveals the most responsible words in s for resolving c .",
"Following Shah et al. (2020), we formulate an unsupervised objective as min n (cid:88) i =1 m i s.t. rel( c, s (cid:12) m ) = NEI .",
"We evaluate the quality of m by comparing it in terms of F1 to both (1) m edit , the non-stopwords removed or replaced in the true revision (i.e., edit prediction ), and (2) m manual , a manually annotated human reference, (i.e., rationale prediction ).",
"We implement the following two baselines: Unsupervised.",
"Distantly Supervised.",
"By leveraging opposing claims present in VITAMINC, we are able to identify m edit = diff ( s t 1 , s t ) i.e., the non-stopwords that are deleted or replaced in s t 1 when compared to s t .",
"We then use m edit as distant supervision for m , where L ds = n (cid:80) ni =1 log p ( m i = m edit i ) .",
"We combine both the L us and L ds losses.",
"As facts change, the sources reporting them must change as well to reflect the most recent information.",
"In VITAMINC, this is reflected via the active revisions to Wikipedia.",
"We simulate automating this process by considering two generation tasks: Automatic Revisions.",
"f revise : S C S to produce a new context s t that minimally modifies s t 1 to agree with c .",
"For example, one can change s t 1 in Figure 2 to state before December in order to agree with the claim.",
"Claim Extraction.",
"Given a revision ( s t 1 , s t ) , we learn f extract : S S C to produce a short claim c that expresses the factual change.",
"In both tasks, the output should satisfy rel( c, s t ) = SUP , while rel( c, s t 1 ) = REF .",
"We use f verdict (4.2) to evaluate this requirement.",
"We experiment with both BART-base (Lewis et al., 2020a) and T5-base (Raffel et al., 2020) sequence-to-sequence transformer-based generators.",
"For the revision task, we concatenate s t 1 and c with a separator and train the model to predict s t .",
"For the claim extraction task, we combine the input pair ( s t 1 , s t ) into a single sentence that visualizes the revision (e.g., sales of {4.7 5.4} million ).",
"We present and analyze results for the models described in Section",
"4. Our analysis attempts to evaluate several questions: (1) How well can the current state-of-the-art models perform on the VITAMINC tasks?",
"(2) Does VITAMINC increases the robustness of models against adversarial examples?",
"(3) Can VITAMINC improve interpretability by providing supervision for anchoring words?",
"In addition to VITAMINC, we train and evaluate on several related datasets, which we briefly describe:",
"FEVER (Thorne et al., 2018): A popular fact verification dataset based on Wikipedia.",
"We use the provided SUP and REF claim-evidence pairs.",
"For NEI claims, we randomly sample neutral evidence from the article with the highest BM25 score (Fisch et al., 2021).",
"MNLI (Williams et al., 2018): A large and diverse dataset for natural language inference.",
"The three-way sentence-pair entailment prediction is similar to fact verification.",
"We use the hypothesis as the claim and the premise as the evidence and evaluate on the mismatched evaluation set.",
"Symmetric (Schuster et al., 2019): A set of challenging symmetric, synthetic extensions to FEVER's evaluation set that avoid claim-only bias.",
"Adversarial (Thorne et al., 2019c): Adversarial examples created by participants of the FEVER 2.0 shared task.",
"Teams were asked to create claims that Model Train data AUC Prec.",
"and REF claims and their gold evidence sentences.",
"Triggers (Atanasova et al., 2020): A set of 186 FEVER claims paraphrased adversarially to contain universal adversarial triggers (Wallace et al., 2019).",
"Its small size leads to high variance results.",
"ANLI (Nie et al., 2020): An adversarial dataset for MNLIand FEVER-based models.",
"The creation was performed in three iterative rounds in which a model was trained, and then crowdworkers devised adversarial inputs, and the process repeated.",
"PAWS (Zhang et al., 2019): A dataset of altered Wikipedia sentences using word swapping and back-translation.",
"Human annotators labeled whether the modified sentence is a paraphrase or not.",
"We evaluate whether a PAWS-trained classifier can be used for our factual revision flagging task.",
"Table 3 shows the results of our baseline models on the factual revision flagging task.",
"First, we notice that a model trained on the PAWS dataset (reaching 93.42 F1 score on PAWS test) does not transfer well to the flagging task, and performs on par with a simple edit distance heuristic.",
"We hypothesize that this is a result of the entity scrambling technique used to synthetically revise sentences in PAWS, which is different from the edits introduced by real, factual Wikipedia revisions in practice.",
"Second, we see that the performance of neural models trained on the VITAMINC flagging task increases with richer inputs and more advanced modelsdemonstrating the complexity of the task.",
"The ALBERT (diff) model that uses only the modified word sequences from each sentence (i.e., contextual within a subspan) improves the AUC by 10 points over a BOW model that gets a similar input.",
"The ALBERT (full) model that receives the full sentences as input (i.e., has access to even more context), further improves the AUC by 2 points.",
"Nevertheless, the best model still only reaches 83 Figure 3: Test accuracy of models trained on a dataset of 100K combined SUP and REF examples from VITAMINC and FEVER.",
"Table 4 summarizes the results for classifiers trained on fact verification and NLI datasets.",
"Verifying claims against real revisions proves to be the hardest.",
"The best model achieves 89% accuracy, lower than that on either VITAMIN C's synthetic cases or the original FEVER examples.",
"Including VITAMINC examples in the training data drastically increases models' sensitivity to contrastive examples (rightmost column)while preserving the in-domain accuracy (only 0 . 42% for FEVER and +0 . 12% for MNLI with ALBERT-xlarge).",
"Another evidence for the generalization properties conferred by VITAMINC is its zero-shot performance to both other datsets.",
"An ALBERT-xlarge model trained only on VITAMINC reaches 76% and 79% accuracy on FEVER and MNLI, respectively.",
"In contrast, the transfer accuracy for MNLI FEVER is 70% and for FEVER MNLI is only 38% .",
"Most importantly, models trained with VITAMINC perform better on challenging adversarial datasets.",
"On the otherhand, simply augmenting FEVER data with MNLI data has a limited effect on adversarial examples.",
"10 We conjecture that the contrastive nature of VITAMINC helps models better learn the relations between the claims and evidencesand to avoid relying on certain artifacts that do not generalize well.",
"To further probe the value of VITAMINC examples compared to FEVER ones ( SUP and REF 10 We've also tried augmenting FEVER with ANLI for an ALBERT-xlarge model and find it to achieve only 73% , 91% , and 34% on",
"Adver.,",
"Sym., and Triggers, respectively.",
"only), we compose training sets of 100K examples using different ratios of the two datasets.",
"As shown in Figure 3, including more VITAMINC pairs continuously improves the performance on the challenging adversarial and symmetric evaluation sets.",
"As an additional qualitative experiment, given the recent successes of huge language models such as GPT-3 (Brown et al., 2020), we explore whether such models develop sufficient context sensitivity on their own.",
"Appendix C shows the results of classifying several claims using a few-shot GPT-3 model.",
"We find that GPT-3 still largely under-performs our VITAMIN C-trained models in terms of sensitivitydemonstrating the importance of using VITAMIN C's unique structure during training.",
"Table 5 shows the results of our baseline models for identifying word-level rationales (i.e., anchoring words in the evidence).",
"While our unsupervised model is able to uncover some patterns, directly leveraging the structure of VITAMINC to obtain distant supervision for likely anchoring words (i.e., token labels) improves both the edit prediction and the word-level rationale prediction performance.",
"11 Example predictions are provided in Appendix E. 11 We evaluate rationales using a manually annotated test set of 300 examples (150 each from VitC real and VitC synthetic).",
"Table 6 presents the results on factually consistent generation.",
"We find BART to perform better in both of our generation tasks (though we only tried the default setting).",
"The BLEU score (Papineni et al., 2002) is lower in the claim extraction task since there is freedom in how to phrase the claims, which can result in greater differences between the outputs and the references.",
"The BERT-based BLEURT score (Sellam et al., 2020) shows a similar trend.",
"Still, the claim extraction model succeeds in updating the facts that reflect the true revision 86% of the time, as measured by the fact verification model's verdict ( f verdict ).",
"The revision generator aims to modify sentences so that they agree with a given claim.",
"According to our fact verification model's verdict, it succeeds in doing so 76% of the time.",
"Furthermore, revisions should resemble real ones, and preserve the remaining content that is unrelated to the claim.",
"The SARI KEEP F1 (Xu et al., 2016) of 75 shows that the model and the reference mostly agree on parts of the sentence that should be kept unchanged.",
"We find that the token-based measures and our f verdict metric agree well with human (manual) evaluation scores.",
"We randomly sampled 100 generated and human-written sentences per task, and asked workers on Amazon MTurk to rate their grammaticality and whether the evidence s t supports the claim.",
"The scores of the generated sentences were on par with the human-written ones, indicating the high-quality of our outputs.",
"Table 7 presents two example generations for the claim extraction task (we provide additional qualitative examples in Appendix E).",
"Our model is able to efficiently extract a self-contained claim that expresses the correct fact after the edit.",
"As in 5.3, Target Model SARI scores Manual evaluation ROUGE 2 BLEU KEEP ADD DEL AVG BLEURT f verdict Grammar SUP Revision T5 77.63 47.46 72.61 13.32 43.04 42.99 0.38 64.52 81.00 71.80 BART 85.23 54.86 75.36 18.31 47.95 47.21 0.67 76.26 84.80 83.20 Claim T5 35.19 13.95 44.36 20.59 87.54 50.83 -0.12 75.39 71.33 72.22 BART 40.38 16.14 52.91 23.62 91.37 55.97 0.16 85.83 75.78 74.22 Table 6: Factually consistent generation results.",
"we also explore how GPT-3 handles this task (we provide two demonstrations in the prompt).",
"Compared to the BART model trained on VITAMINC, GPT-3 appears to make more factually inconsistent or unsupported generations (see Appendix C for more details).",
"Encouragingly, our f verdict classifier is still able to pick up on thisas demonstrated by the predictions in the rightmost column of Table 7.",
"For example, classifying the report about 20 deaths as NEI since it is not part of the source.",
"Once again, this serves to qualitatively demonstrate the effectiveness of leveraging VITAMINC.",
"We presented VITAMINC, a large-scale dataset for training and evaluating fact verification models using contrastive contexts.",
"Our novel method of leveraging factual revisions to Wikipedia enabled us to create challenging examples in which a claim is paired with contexts that are lexically similar, yet factually opposing.",
"Our results illustrated that training on VITAMINC improves classifier sensitivity to subtle changes in evidence, and increases their robustness to adversarial examples.",
"Furthermore, we formulated several new, important tasks for fact verification that VITAMINC allows us to test.",
"We showed how the dataset's unique before and after structure lends itself to training classifiers to flag factual revisions.",
"In addition, for factual revisions, the edits reveal which words in the evidence are the most criticalwhich helps supervise word-level rationale models for better interpretability.",
"Finally, we demonstrated that VITAMINC can help with factually consistent text generation.",
"We hope that this work and the range of tasks it presents will motivate and support the fact verification field in developing reliable models that can adapt to dynamically changing evidence.",
"We thank the TransPerfect team, Darsh J Shah and Enrico Santus for helpful discussions, as well as the members of the MIT NLP group and Andreas Vlachos for valuable feedback.",
"This work is supported in part by the Facebook Online Safety Benchmark Award.",
"TS is supported in part by DSO grant DSOCL18002.",
"AF is supported in part by a NSF Graduate Research Fellowship."
] | [
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"method",
"objective",
"objective",
"abstain",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"result",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Word embeddings obtained from neural network models such as Word2Vec Skipgram have become popular representations of word meaning and have been evaluated on a variety of word similarity and relatedness norm-ing data.",
"Skipgram generates a set of word and context embeddings, the latter typically discarded after training.",
"We demonstrate the usefulness of context embeddings in predicting asymmetric association between words from a recently published dataset of production norms (Jouravlev and McRae, 2016).",
"Our findings suggest that humans respond with words closer to the cue within the context embedding space (rather than the word embedding space), when asked to generate thematically related words.",
"Modern distributional semantic models such as Word2Vec (Mikolov et al., 2013a,b) and GloVe (Pennington et al., 2014) have been evaluated on a variety of word similarity and relatedness datasets.",
"A considerable amount of attention has been paid to what models and, more recently, what parameter settings and input data produce embedding representations that better reflect sim-ilarity/relatedness between words, taking human normative judgments as the gold standard (Baroni et al., 2014; Kiela et al., 2015; Levy et al., 2015; Melamud et al., 2016; Sahlgren and Lenci, 2016).",
"Similarity between two words is often assumed to be a direction-less measure (e.g., car and truck are similar due to feature overlap), whereas relatedness is inherently directional (e.g., broom and floor share a functional relationship).",
"In addition, it is well established in human behavioral data that similarity and relatedness judgments are both asymmetric.",
"For example, humans judge leopard to be much more similar to tiger than tiger is to leopard (Tversky and Gati, 1982).",
"A concordant asymmetry is seen in relation tasks: in free association data, baby is a much more likely response when cued with stork than stork would be as a response when cued with baby (Nelson et al., 1999).",
"The distinction between similarity and relatedness, and the asymmetry of the judgments have typically been ignored in recent evaluations of popular embedding models.",
"There is ample experimental evidence in the psycholinguistic literature that similarity and relatedness are both well represented in human behavior (see Hutchison (2003), for a review), and are qualitatively distinct representations or processes.",
"In semantic priming paradigms, a target word is processed more efficiently when briefly preceded by a related or similar word (e.g., honey-bee or wasp-bee ) relative to a neutral or unrelated prime (e.g., chair-bee).",
"Facilitation is seen for word pairs that are purely category coordinates ( lawyer-surgeon ) or purely associates ( scalpel-surgeon ), and pairs that share both types of relations ( nurse-surgeon ) tend to see an additive processing ben-efit that reflects the privilege of both similarity and relatedness, an effect generally referred to as the associative boost (Chiarello et al., 1990; Lucas, 2000).",
"Asymmetries are the norm in semantic priming data, leading to the early theoretical prominence of spreading activation models to account for human data.",
"Free association data provide complimentary evidence of the qualitative distinction between relatedness and similarity in human memory.",
"In a free association task, participants are provided with a cue word and are asked to rapidly respond with a word that comes to mind first.",
"Huge norms of human responses have been collected over the years; for example, (Nelson et al., 1999) early norms contain three-quarters of a million responses to over 5,000 cue words across 6,000 par-675 ticipants.",
"More recently, (De Deyne et al., 2016) have more than doubled the size of Nelsons norms in multiple languages by gamifying the task 1 .",
"The majority of responses in free association data are based on thematic relatedness rather than similarity per se (De Deyne and Storms, 2008).",
"As with semantic priming, free association norms are dominated by asymmetric relations: While stork has a very high probability of eliciting baby as a response across participants, cuing with baby brings so many competitors to mind that it is extremely unlikely to respond with stork (Hutchison, 2003).",
"The difficulty of accounting for similarity and relatedness with a single vector representation for each word has led to the suggestion that distinct representations, and perhaps even distinct learning models, are needed for optimal performance on these distinct tasks (Mandera et al., 2017).",
"It may be unrealistic to expect a single vector representation to account for qualitatively distinct similarity and relatedness data.",
"Further, asymmetries in human similarity and relatedness tasks have been used as strong evidence against spatial models of semantics such as word embedding models, and in favor of Bayesian models (Griffiths et al., 2007); but see (Jones et al., 2017).",
"The cosine between two word vectors is inherently symmetric: leopard-tiger has the same cosine as tiger-leopard .",
"In order to understand how distributional representation of words reflect similarity and relatedness one should study the algorithms.",
"Each cell of a word vector in a count model indicates the first-order association between the target word and a context word, document, or topic.",
"Dimensionality reduction algorithms are applied to obtain denser representations that can demonstrate second-order relatedness/similarity between words (e.g. applying SVD to PMI matrix).",
"Relative to these classic models, predictive distributional models such as Word2Vec are generally more complicated.",
"Decomposition and interpretation of the neural word embeddings is less straightforward because the final vectors incrementally converge from a predict-and-update process based on a local objective function rather than by global counting or a batch abstraction process.",
"Most evaluative studies of predictive distributional semantics have viewed these models as a black box, considering only at the output vectors.",
"For example, the Word2Vec Skipgram architecture has easily taken 1 https://smallworldofwords.org the lead and become representative of the predictive distributional semantic models, but little attention has been paid to what statistical information is best represented in the two resulting embedding sets.",
"The Skipgram is a feed-forward network with localist input and output layers, and one hidden layer which determines the dimensionality of the final vectors.",
"It is trained on word-context pairs with an objective function trying to minimize the error of predicting context words within a specific window around the center word.",
"At the end of training, two matrices are produced, one representing word embeddings and the other representing context embeddings for each and every vocabulary word.",
"While word embeddings have been used as the output of Skipgram in many previous studies, little attention has been paid to the context embeddings and the usefulness of these vectors in performing lexical semantic tasks (Levy et al., 2015; Melamud et al., 2015; Aoki et al., 2017).",
"Recently, Asr and Jones (2017) used an artificial language to evaluate how hyperparameter settings affected the Skipgrams representation of firstvs. second-order statistical sources.",
"In natural languages, paradigmatic and syntagmatic information sources are non-independent, confounding similarity and relatedness judgments.",
"Words that are more similar tend to also share functional, script, or thematic relations (Hutchison, 2003; Lucas, 2000); e.g., surgeon-nurse .",
"Asr and Jones artificial language was engineered to disentangle the two sources of statistical information.",
"Following on suggestions by Levy et al. (2015), Asr and Jones found that averaging context vectors with the word vectors (w+c post-processing) produced optimal organization of the semantic space for both paradigmatic and syntagmatic structure.",
"The goal of the current work is to more systematically explore the integration of word and context vectors in similarity and relatedness data; our two core objectives are:",
"1. To evaluate the Skipgram model on thematic relatedness production norms, which implicitly manifests asymmetric relations between words compared to the typical evaluation on direction-less similarity/relatedness.",
"2. To explore novel ways of computing relatedness scores by contributing both word and context embeddings produced by Word2Vecs Skipgram architecture.",
"One of the famous datasets on word similar-ity/relatedness is Wordsim353 (Finkelstein et al., 2001) including 353 English word pairs, and a revised version (Agirre et al., 2009) splitting similar from related word pairs (WrordSim).",
"These data have been repeatedly used in comparative studies on distributional semantic models.",
"Recently, the division between similarity and relatedness judgments has been highlighted in the literature, resulting in development of new datasets with more specific annotation instructions.",
"Hill et al. (2015) introduced the SimLex-999 dataset (SimLex) for purified evaluation of word similarity by asking the annotators explicitly not to score based on degree of relatedness.",
"For example, the word pair coast-shore received an average similarity score of 9.00 in SimLex and 9.10 in WordSim, while the related word pair clothes-closet was assigned an average score of 1.96 in SimLex and 8.00 in WordSim.",
"More recently, Jouravlev and McRae (2016) collected pure relatedness data through a production experiment.",
"They presented participants with cue words and instructed them to response only with directly related words and not taxonomically similar words.",
"This database (ProNorm) includes responses to 100 object words, providing us with directional relatedness score for 1,169 word pairs.",
"The important distinction of SimLex and ProNorm datasets compared to other available similarity/relatedness data is the explicit instruction of participants to pay attention to one aspect of word relations and not the other.",
"The ProNorm dataset, also has an advantage of a more natural setup, where associatively related words were generated by participants, rather than being selected by language experts and only rated by the participants.",
"In this paper, we use ProNorm as the main dataset to investigate how word embedings should be used to measure relatedness between two words and how the free recall experiment can be simulated for the model.",
"The SimLex dataset is used to set a baseline for comparison against the similarity measurement task, which is the most common intrinsic benchmark for evaluation of word embeddings.",
"Finally, we use the WordSim dataset to explore whether the observed differences between vector-based measures of similarity and relatedness come out if the benchmark data is collected in implicit setup, where participants did not know they were rating for similarity or relatedness.",
"Word embeddings produced by the Skipgram architecture have been used in many previous studies as the word meaning representation and are the main output of the model.",
"In the original implementation of Word2Vec, the context embeddings (weights on the hidden to output layer of the neural network) were discarded after learning was complete.",
"Inspired by Pennington et al. (2014) in the architecture of the GloVe model, Levy et al. (2015) proposed that the final word embeddings in Word2Vec could be obtained from the average of word and context embeddings.",
"They implemented word + context (w+c) as a useful post processing option for the Word2Vec Skipgram algorithm in their published version of the model 2 .",
"The w+c option allows computation of word similarity based upon both first and second-order co-occurrence information.",
"The cosine similarity between two words based on the dot product of their w+c embeddings, which we call the AA measure ( A standing for the average of word and context embeddings of a word), includes the following terms: cos ( a, b ) = W a W b + C a C b + W a C b + C a W a 2 W a C a + 1 W b C b + 1 (1) While traditional measures, i.e., WW (cosine similarity of the word embeddings), and AA (co-sine similarity of the word+context embeddings) are suitable predictors for words similarity, we hypothesize that the asymmetric measures WC (word embedding of the first word and context embedding of the second) and CW (context embedding of the first word and word embedding of the second) should be better indicators of relatedness.",
"This decomposition of similarity measures is especially useful when asymmetric associations between words are being inferred: the asymmetric measures reserve the direction and the type of relation: WC reflects the likelihood of the second word occurring in the context of the first word, and CW reflects the likelihood of the first word occurring in the context of the second word.",
"These two quantities are different, given that the W and C matrices are obtained from two different layers 2 https://bitbucket.org/omerlevy/ hyperwords 677 of the neural network, one connected to the input layer and the other to the output layer.",
"SimLex and ProNorm provide complementary scores on similarity and relatedness between words.",
"In order to demonstrate and examine how word embeddings should be used in asymmetric relatedness measurement, we designed two experiments.",
"In both experiments word and context embeddings were obtained from Skipgram models trained on a tokenized English Wikipedia dump 3 .",
"We slightly modified the original Word2Vec Skipgram implementation by Levy et al. (2015) to save both word and context vectors.",
"We tested vector spaces with varying dimen-sionalities ( dim =100/200/300) and number of context words ( win =3/6/10), as well as minimum occurrence cutoff ( min =1/5), negative samples ( neg =1/5) and iterations ( iter =1/5).",
"These variations were tested to ensure the observed patterns reported in the experiments, but we report numerical results only for best performing models.",
"In particular, higher dimensional vectors with dim =300 produced consistently better alignment with human scoring data.",
"We also found min =1, neg =5 and iter =5 to be the optimal parameter settings across all experiments.",
"Our first experiment follows an established evaluation strategy by computing the Spearman correlation coefficient between the set of similarity measures produced by the word embedding model (WW/CC/WC/CW/AA) and the similar-ity/relatedness scores taken from the SimLex and ProNorm datasets.",
"As ProNorm score of a word pair ( w 1 , w 2 ) , we simply use the total number of times a response word w 2 was produced by all subjects given w 1 as a cue word.",
"Interested readers are encouraged to see Jouravlev and McRae (2016) for more details on the data collection procedure.",
"Our hypothesis is that for taxonomic similarity judgment the classic WW measure, i.e., the cosine of the word vectors of w 1 and w 2 would perform best, especially given the fact that in collection of similarity norms the direction between two words was not a factor.",
"For explicit relatedness judgment, on the other hand, we expect one of 3 https://sites.google.com/site/rmyeid/ projects/polyglot the asymmetric measures to be the best predictor.",
"WC, which is the cosine between the word embedding of the cue w 1 and the context embedding of the response w 2 tells us how likely we would see w 2 and similar words in the context of w 1 .",
"CW reflects the opposite way relatedness, meaning how likely it is to see w 1 and similar words in the context of w 2 .",
"Note that these two quantities are different both mathematically and conceptually, because they are obtained from generalization over word occurrences in many different contexts.",
"We hypothesize that WC should be the best predictor for the ProNorm score of ( w 1 , w 2 ) given that production in the constrained setup of the ProNorm experiment was guided by thematic relatedness, making it more like a non-syntactic language modeling task: guessing which other words/concepts might appear within the context of the current word.",
"SimLex and ProNorm collections have almost the same number of word pairs.",
"However, it is important to note that ordering ProNorm word pairs based on their relatedness scores is probably more difficult than ordering the SimLex list of word pairs.",
"This is because in the ProNorm data collection setup, all word pairs were basically generated based on relatedness, whereas in SimLex, experimental items were pre-designed in a way they covered a wide range of closely similar to totally different word pairs.",
"Ordering SimLex should in turn be harder than ordering words in the old WordSim353 similar and related word pair collections, because each of the latter subsets has a much smaller number of items compared to SimLex collection.",
"In order to demonstrate the difference between the tasks of ordering words based on similarity vs. relatedness in an explicit setup (SimLex and ProNorm) with an implicit, i.e., a mixed setup we include WordSim353 (Agirre et al., 2009) in our experiment.",
"We hypothesize that the patterns of superiority of one vector-based measure to another in ranking word pairs based on their similarity and relatedness should come out even if people were not explicitly instructed to pay attention to a specific aspect.",
"Table 1 displays correlation scores between similarity ratings in SimLex and Skipgram similarity measures introduced in the previous section (all",
"significant at p < 0 .",
"001 ).",
"Results on models with dim =300 and win =3/6/10 are reported (see Appendix for supplementary results).",
"The WW measure exhibits consistently a better alignment with the human rating data compared to the all other measure.",
"This suggests that second-order co-occurrence information plays the main role in similarity between two words.",
"In collection of SimLex, subjects were asked explicitly not to rate similarity based on thematic relatedness.",
"It is likely that the human ratings were affected not only by co-occurrence information encoded in word embeddings but also in context embeddings.",
"As we expected, the best predictors of this data are the symmetric similarity measures, and in particular, WW.",
"The last row of the table includes Spearman correlation between human similarity judgment and a linear regression model using all Skipgram measures as predictors.",
"Thus, numbers in this row show an upper bound for Spearman scores of the individual measures (obtained from an optimal weighting of all individual measures).",
"Table 2 shows the Spearman correlation between ProNorm scores and the Skipgram measures (all significant at p < 0 . 001 ).",
"As we hypothesized, WC stands out as the best predictor, suggesting that human responses to a cue word (when asked to name related words) are more likely to be found in the vicinity of the cue word within the context embedding space rather than within the word embedding space.",
"The correlation between the ProNorm scores with WC is larger than with WW or AA scores.",
"This indicates the importance of the knowledge encoded in the context embeddings, but specifically the prediction power of the asymmetric similarity measure compared to the symmetric ones.",
"Interestingly, CW is not as good as WC in this task.",
"This reveals the importance of the direction in associative relatedness between words such as baby and stork, which seems to correlate with their vector representations.",
"Finally, the regression model, which applies an optimal weighing on different Skipgram measures finds the best fit, whereas AA which gives equal weights to symmetric and asymmetric measures fails to compete with WC alone.",
"Comparisons between Tables 1 and 2 suggest that, similarity and relatedness are best approximated by symmetric and asymmetric measures, respectively.",
"We next examined the WordSim353 data to evaluate whether above implications apply also to ratings collected in implicit setup, i.e., where human subjects were not instructed to response based either on taxonomic similarity or associative relatedness.",
"We examine each subset of WordSim353 separately and treat them like similarity and relatedness data.",
"Table 3 shows results on these two collections of word pairs with best parameter setup; i.e., with dim =300 and win =3 and 6 4 .",
"Similar to our previous experiments on the other datasets, relative ranking of similar word pairs is best predicted with commonly used measure WW alone, which is indicative of second-order co-occurrence similarity.",
"For related word pairs, asymmetric measures WC and CW, which are indicative of first-level co-occurrence come out as better individual predictors compared to WW.",
"However, the balanced combination of all, i.e., the AA measure seems to be the consistent winner across both datasets.",
"This finding suggests that when similarity/relatedness is scored by people as an overall degree of closeness between words and without explicit instruction to focus on one aspect, the most reliable predictor would be a cosine measure that considers both symmetric and asymmetric types of relations between words.",
"4 Results for win =10 were not as good as in other conditions for this experiment, therefore we only report the very best setups with win =3 and 6.",
"Our first experiment focused on discovering the best vector-based predictor for similarity and relatedness between two words.",
"We found that considering context vectors in calculation of the similarity score produces a superior predictor, specially for relatedness, compared to the traditionally used measure (WW) based only on word vectors.",
"The experiment in this section is a more tangible evaluation of the Word2Vec model in a relatedness task when a cue word is given.",
"The aim is to simulate the production experiment with which the ProNorm data were collected and to evaluate whether using the WC measure will give us more true responses than WW.",
"For the purpose of this experiment, we use the Skipgram model with dim =300 and win =10 as these settings produced the best overall performance in the quantitative experiment on ProNorm data.",
"The simulation procedure is as follows: For each cue word w 1 in the ProNorm dataset, each model generates the n most similar words in the vocabulary and we count how many of the human responses were contained in each set.",
"The first model looks up nearest neighbors of w 1 within the word space (thus using WW as the proximity measure) and the second model searches for the nearest neighbors of w 1 within the Context space (thus using WC as the proximity measure).",
"Variable n indicates the total number of guesses a model is allowed to make when responding to a given cue word.",
"In other words, n is the size of the subspace explored around the cue word within each distributional semantic space.",
"Since our previous experiment showed a higher correlation between WC and the relatedness norms, we expect that neighboring words within the context embedding space (in the vicinity of the cues word embedding) Figure 1: Number of human responses found in word and context embedding spaces near the word embedding of the cue (x-axis) as the search space is increased (y-axis).",
"should be more populated with related words (i.e., human responses) compared to neighboring words within the word embedding space.",
"Regarding the above procedure, we first extract the word embedding of the cue w 1 and then consider all human responses for that cue, i.e. w 2 of all existing pairs ( w 1 , w 2 ) in the dataset, within both the word and context embedding spaces.",
"If, as results of the previous experiment suggest, WC is a better measure of forward relatedness, then a larger portion of human responses should be found in neighboring words within the context space than within the word space surrounding the cue word.",
"Our distributional spaces are constructed based on Wikipedia text; therefore, the model vocabulary is very large and noisy.",
"While the top-rank guesses of the model (both measures) are indeed similar/related to the cue words, a lot of them are more frequent in the training corpus genre, i.e. Wikipedia language, than in the simpler language humans (e.g., subjects of the ProNorm study) use when recalling direct relations.",
"For example, in response to the cue word restaurant subjects of the ProNorm study generated words such as plate , food , menu , drink , and chef .",
"In addition to correct guesses, both WW and WC models trained on web corpora generated words such as bistro , eatery , hotel , grill and buffet as closest words to restaurant .",
"Another example would be the cue word house , which in the ProNorm experiment triggered door , family , bricks , bed , window , roof , furniture , fireplace , chimney , and kitchen .",
"The WW model generated the following words as top candidates, which are in fact taxonomically sim-680 ilar, to house : mansion , farmhouse , and cottage .",
"WC model generates relatively more thematically related words, some of which are correct guesses (overlapping with human data) and some are not: barn , residence , estate , dining , room , stables , fireplace , family , and kitchen .",
"On average, only one human response per cue can be found in the top 30 model responses.",
"Blue and candy bars in Figure 6 show the total number of correct guesses by the model using WW and WC measures, respectively.",
"This quantity is the total correct guesses for all 100 cues in the ProNorm dataset (x-axis), when the n most similar neighboring words are examined in each space (y-axis).",
"We explored n values between 10 and 100.",
"Table 4 shows an example of our simulation for the word car .",
"As the search space widens up to 100 most similar words in the vicinity of the cue word embedding, more overlap is observed between human responses and model responses.",
"In addition to synonymous words such as automobile , the majority of incorrect guesses for the cue word car are names of automobile models such as suv and bmw .",
"The W space around the cue word embedding is more populated with such taxonomically similar words compared to the C space around the cue word.",
"On the other hand, as the results suggest, thematically related words such as driver and steering wheel can more easily be found within the surrounding C space.",
"This pattern is very consistent across all the cue words in the ProNorm dataset, suggesting that WC is a more valid measure of forward thematic relatedness.",
"This qualitative observation suggests that the differences between the Spearman correlations in Table 2 were meaningful, and vector-based measures of similarity and relatedness, i.e., WW and WC, return different sets of neighboring words to a given cue word.",
"Word embeddings learned from unlabeled text using different models such as Word2Vec and Glove are currently being used for representation of input to deep neural networks that carry out a variety of NLP tasks.",
"Word similarity/relatedness datasets have been the basis for intrinsic evaluation of word embeddings.",
"These datasets provide researchers with insights about how word relations are demonstrated in a distributional space.",
"Previous work has employed WordSim353, SimLex999 and several n Correct guesses by each measure 20 WW tires WC tires | driver 50 WW tires WC tires | driver | driving 100 WW tires WC tires | driver | driving | steering wheel Table 4: Human responses for the cue word car found in topn neighboring words within the word and context embedding spaces using WW and WC measures.",
"other established similarity/relatedness datasets for evaluation of word embeddings (Baroni et al., 2014; Kiela et al., 2015; Levy et al., 2015; Melamud et al., 2016; Sahlgren and Lenci, 2016).",
"A closely related previous study to the current study is the comprehensive evaluation of Word2Vec and three other distributional semantic models by Levy et al. (2015), where they demonstrated that all the models could learn word relations to similar extent if hyper-parameters were carefully tuned.",
"In particular, Levy et al. discussed the effect of averaging word and context vectors on capturing first and second-order similarity.",
"However, the w+c option did not make it to their result tables because it was not selected as one of the generally optimal settings, while mentioned to be useful to test.",
"Asr and Jones (2017) looked more closely into this optional parameter setting in their study of count-based vs. predictive distributional semantic models (Word2Vec Skipgram vs. PPMISVD).",
"Using an artificial language framework, they showed that considering the w+c option would extend the range of word-to-word cosine similarity scores, and directly affect the topology of word clusters in the distributional space.",
"However, none of the mentioned works studied the individual terms in the cosine similarity obtained from Word2Vec Skipgram when the w+c option is used, thus they left the question of using these terms for replicating psycholinguistic data on asymmetric association open.",
"Another related line of research in NLP is work on retrofitting of word embeddings using additional lexical resources to reflect specific relations between words more strongly (Faruqui et al., 2015; Kiela et al., 2015).",
"Kiela et al. (2015) looked into the particular case of similarity and relatedness.",
"They pro-681 posed using Thesaurus synonymy data and free-association data in training of the word embeddings to obtain vectors suitable for similarity and relatedness, respectively.",
"In contrast to this category of work though, the objective of our research is elaborating the functionality of the word embedding algorithms and how their general-purpose output should be interpreted and queried rather than trying to maximize the performance of the model on a given task by modifying training data or the training mechanism.",
"Our study adds to the existing body of research by employing word relatedness data collected within a standard psychology experiment and showing how firstvs. second-order information accumulated on the two layers of the popular Skipgram model can be used for different tasks.",
"We showed that the distributional measure for capturing asymmetric relatedness between two words is different from a measure that captures taxonomic similarity even though both types of information are obtained from a unified model trained on a single source of co-occurrence data.",
"Word and context embeddings produced by Word2Vec Skipgram are two different semantic representations of the vocabulary words within the same Euclidean space.",
"We proposed several measures for complementary similarity and relatedness judgments computed based on these embeddings.",
"Asymmetric measures obtained from the inner product of a vector from the word embedding space and a vector from the context embedding space are representative of first-order thematic relations between words.",
"We examined our proposal using a recently published dataset of production norms (Jouravlev and McRae, 2016) and confirmed when people were explicitly asked to recall thematically related words, their responses were more likely located within the context embedding space in the vicinity of the cues word embedding.",
"In other words, WC, where W is the word embedding of the cue and C is the context embedding of the response, best measures forward thematic relatedness.",
"We also ran experiments on pure similarity judgment by employing a commonly used dataset of word pairs scored according to taxonomic similarity rather than other types of relations (Hill et al., 2015).",
"Human judgments on word similarity taken from this data were best predicted by a symmetric measure, the classic WW cosine similarity between the word vectors.",
"This suggests that the best measures of taxonomic similarity and thematic relatedness are different in distributional space, even though information involved in both measurements is collected from the same set of co-occurrence features.",
"Based on the observations made in the paper, we can also argue that the free recall task in the constraint manner where people are asked to name related words (such as in Jouravlev and McRae's study) is similar to the task of predicting context words for the given cue word.",
"This is an important finding for the psycholinguitic research trying to study the mechanisms in lexical production tasks.",
"For NLP research, these findings motivate taking different approaches in problems where thematic relations between words is important for the task, e.g., in assessment of text coherence, question answering, or language generation.",
"Finally, our experiments elaborated the functionality of the two transformation matrices in Word2Vec architecture.",
"We repeated some of our experiments with GloVe, another popular word embedding model with two final sets of (word/context) vectors.",
"We found similar patterns of relative goodness of measures: WW was consistently better in scoring similarity between two words and WC was better in measuring the thematic relatedness.",
"However, the asymmetry between WC and CW did not come out clearly in these experiments and the overall performance of the GloVe model in the similarity task was much lower than Skipgram.",
"A closer investigation of the GloVe model architecture will be necessary for argumentation about its different results (B Appendix includes results of our preliminary experiments with GloVe).",
"Other vector space models obtained from non-neural architectures can also be examined in this framework.",
"For example, Levy et al. (2015) showed that the w+c option (us-ing the average of word and context embeddings as word vectors) could be simulated in a count-based model that applies SVD to the PMI matrix of word-context co-occurrences.",
"Examining these models on similarity vs. relatedness using our proposed measures will be left for the future.",
"5 5 Code for running all experiments using Word2Vec and GloVe models is available at https://github.com/ FTAsr/wordvet 682 Acknowledgments We are thankful to our reviewers for their helpful feedback on the initial version of the paper and suggestions for extension of the work.",
"This research was funded by grant R305A140382 from the Institute of Education Sciences, USA.",
"Supplementary results on SimLex and ProNorm datasets using Skipgram with dim =100 & 200 are presented here.",
"Patterns of how WW and CW measures predict similarity and relatedness are consistently repeated in these parameter settings.",
"Supplementary result on SimLex and ProNorm datasets using GloVe models with dim =300 and win =3/6/10 are presented in this section.",
"GloVe had a general disadvantage in learning word similarity (SimLex) compared to Skipgram.",
"Patterns of how WW and CW measures predict similarity and relatedness are nevertheless similar across models: WC is much better than WW for relatedness prediction."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Recent work in neural generation has attracted significant interest in controlling the form of text, such as style, persona, and politeness.",
"However, there has been less work on controlling neural text generation for content.",
"This paper introduces the notion of Content Transfer for long-form text generation, where the task is to generate a next sentence in a document that both fits its context and is grounded in a content-rich external textual source such as a news story.",
"Our experiments on Wikipedia data show significant improvements against competitive baselines.",
"As another contribution of this paper, we release a benchmark dataset of 640k Wikipedia referenced sentences paired with the source articles to encourage exploration of this new task.",
"Recent work in neural natural language generation (NLG) has witnessed a growing interest in controlling text for various form-related and linguistic properties, such as style (Ficler and Goldberg, 2017), affect (Ghosh et al., 2017), politeness (Sennrich et al., 2016), persona (Li et al., 2016b) voice (Yamagishi et al., 2016), grammatical correctness (Ji et al., 2017), and length (Kikuchi et al., 2016).",
"This trend offers the promise of empowering existing authoring tools such as Grammarly, Google Smart Compose, and Microsoft Word with the ability to control a much greater variety of textual properties, which are currently mostly limited to grammar, spelling, word choice, and wordiness.",
"What has been relatively less explored in neural NLG research is the ability to control the generation of a current sentence not only in its form , but also its content .",
"1 Consider for example Fig. 1, which illustrates a situation where an author edits a document (here a Wikipedia article), 1 Historically, NLG has focused on generation from structured content such as a database or semantic representation, but this paper is interested in generation from free-form text.",
"and the goal is to generate or suggest a next sentence (shown in orange) to the author.",
"This type of unconstrained, long-form text generation task (Mostafazadeh et al., 2016; Fan et al., 2018) is of course extremely difficult.",
"Free-form generation can easily go astray due to two opposing factors.",
"On one hand, ensuring that the generated output is of relatively good quality often comes at the cost of making it bland and devoid of factual content (Li et al., 2016a).",
"On the other hand, existing techniques can help steer neural models away from blandness in order to produce more contentful outputs (using temperature sampling (Fan et al., 2018), GAN (Goodfellow et al., 2014),",
"etc.), but often at the cost of hallucinating (Wiseman et al., 2017) words or concepts that are totally irrelevant.",
"Neither situation provides a compelling experience to the user.",
"What is clearly missing from the aforementioned authoring scenario is the notion of grounding : there is often a profusion of online resources that bear at least some relevance to any given document currently being written.",
"Much of the general-purpose world knowledge is available in the form of encyclopedias (e.g., Wikipedia), books (e.g., Project Gutenberg, Google Books), and news articles.",
"While the generation of good quality texts without any conditioning on exter-nal sources (Fan et al., 2018) might be an interesting research endeavor on its own, we argue that grounding can make the generation task much easier, e.g., as shown in Fig. 1 where a passage of a news article (green) can be reformulated considering the current context of the document (yellow) in order to produce a natural next sentence (or-ange).",
"In light of this desideratum, this paper addresses the problem of grounded text generation, where the goal is to infuse the content or knowledge from an external source (e.g., a news article as in Fig.",
"1) in order to generate a follow-up sentence of an existing document.",
"We see this as a form of Content Transfer , as other characteristics of the external sourcesuch as style and linguistic formare not controlled.",
"In addition to formulating this new task, our work makes the following contributions: We provide a large dataset of 640k instances that contain parallel data of a source document (news articles), a context, and sentence to be produced.",
"The latter two are extracted from Wikipedia, which is an attractive dataset for grounded generation as many of the statements in Wikipedia cite external sources (i.e., grounded in an external article).",
"Finally, we also provide simple yet efficient models that condition both on the external article and the context of the current document.",
"We compare our models against extractive and abstractive baselines, including summarization methods that simply try to condense the external article without considering the context of the document.",
"Our experiments show that our models which incorporate the context gain 7.0 ROUGE-L F1 points in other words, treating our task as a summarization problem is not enough.",
"Our human evaluations also show that models that are aware of the context generate relevant and fluent sentences that are coherent to the context.",
"This research is concerned with the general problem of grounded authorship assistance, i.e., the task of suggesting text to insert or append in an existing document draft, in such a way that all the added content reflects information from external sources, such as news articles and books.",
"This type of grounded generation task could take many forms, so we decided to formalize the task as follows, while still keeping the task both challenging and practically interesting.",
"Given an external document (green in Fig. 1), and some existing curated text (yellow), the task is to generate a single update sentence (orange).",
"This update sentence should be both relevant to the context and reflec-tive of the information contained in the document.",
"This task bears some similarity with automatic summarization (Nenkova and McKeown, 2011), as a nave approach to the above problem is to append a one-sentence summary of the document to the curated text.",
"While indeed related, the two tasks differ in two key points.",
"First, the one-sentence summary must be contextually appropriate given the previous context of the curated text.",
"Second, summarization is mostly concerned with finding salient information, butin the case of our taskinformation relevant to the context might actually only be auxiliary within the external document.",
"Section 6 (Related Work) further contrasts our task with summarization.",
"Formally we define our task as follows: given an existing curated text s and a document d describing novel information relevant to that text, the system must produce a revised text s (cid:48) that incorporates the most salient information from d .",
"We restrict our focus to the cases where the revised text s (cid:48) can be obtained by appending the new information from d to the original curated text s .",
"2 In particular, we assume we can transform the old curated text s into the new text s (cid:48) by appending one additional update sentence x to s .",
"2 In general, updated information from d might demand substantial changes to s : perhaps core assumptions of s were contradicted, necessitating many removed and rewritten sentences.",
"We postpone this complex setting to future work.",
"This paper operates in a conventional supervised learning setting.",
"For training data, we rely on a large dataset of existing curated text S = { s 1 , . . . , s n } , corresponding documents with novel information D = { d 1 , . . . , d n } , and the update sentences X = { x 1 , . . . , x n } .",
"Our task is to generate the update sentence x i that could be appended to the curated text s i in order to incorporate the additional information from document d i .",
"The goal would be to identify new information (in particular, d i \\ s i ) that is most salient to the topic or focus of the text, then generate a single sentence that represents this information.",
"A natural though difficult means of generating this additional update sentence x is to use a generative model conditioned on the information in the curated text s and the new document d .",
"Recent methods inspired by successful neural machine translation systems have produced impressive results in abstractive summarization (Nalla-pati et al., 2016).",
"Hence, our first step is to use the sequence-to-sequence encoder-decoder model (Bahdanau et al., 2015) with attention (Luong et al., 2015) for our task.",
"This kind of model assumes that the output sentence can be generated word-by-word.",
"Each output word x ti generated is conditioned on all prior words x <ti and an encoded representation of the context z : (cid:89) t p ( x ti | x <ti , z ) (1) Context Agnostic Generative (CAG) Model: One simple baseline is to train a sequence-to-sequence model for the document d alone that does not directly incorporate information from the curated text s .",
"Here, the algorithm is trained to generate the most likely update sentence x = arg max p ( x | d ) .",
"In this setting, we consider the reference document d i as the source and the update sentence to be generated x i as the target.",
"The encoder and decoder do not directly see the information from the curated text s , but the update x inherently carries some information about it.",
"The parameters of the model are learned from updates that were authored given the knowledge of the curated text.",
"Hence, the model may capture some generalizations about the kinds of information and locations in d that are most likely to contribute novel information to s .",
"Context Only Generative (COG) Model: This algorithm is trained to generate the most likely update sentence x = arg max p ( x | s ) .",
"This model is similar to CAG except that we consider the curated s i as the source.",
"In this setting, there is no grounding of the content to be generated.",
"Context Informed Generative (CIG) Model: An obvious next step is to incorporate information from the curated text s as well.",
"We can concatenate the document and the curated text, and produce an encoded representation of this sequence.",
"This approach incorporates information from both sources, though it does not differentiate them clearly.",
"Thus, the model may struggle to identify which pieces of information are novel with respect to the curated text.",
"To clearly identify the information that is already present in the curated text s , a model could encode s and d separately, then incorporate both signals into the generative procedure.",
"Context Receptive Generative (CRG) Model: Our next step was to condition our generative process more concretely on the curated text s .",
"We condition the generative process on the representation of s at each time step.",
"Formally: z d = Encoder d ( d i , d ) (4) z s = Encoder s ( s i , s ) (5) x i (cid:89) t p ( x ti | [ x <ti ; z s ] , z d ) (6) where, d and s are the parameters of the encoder for the document d and encoder for the curated text s respectively, z d and z s are the encoded representations of the document d i and curated text s i respectively.",
"At each time step of generation, the output is conditioned on the tokens generated up to the time step t concatenated with z s .",
"Hence, the generative process is receptive of the context at each time step.",
"Generative models that construct new sentences conditioned on the relevant context are compelling",
"but have a number of modeling challenges.",
"Such a model must both select the most relevant content and generate a fluent linguistic realization of this information.",
"We also consider extractive models: approaches that select the most relevant sentence from the document d to append to the curated text s .",
"These approaches can focus solely on the content selection problem and ignore the difficulties of generation.",
"This simplification does come at a cost: the most effective sentence to add might require only a subset of information from some sentence in the document, or incorporate information from more than one sentence.",
"Sum-Basic (SB): One common baseline is Sum-Basic, an extractive summarization technique that relies on word frequency statistics to select salient sentences (Nenkova and Vanderwende, 2005).",
"As an initial step, unigram probabilities are computed from the set of input documents using relative frequency estimation.",
"Then, sentences are selected one-by-one in greedy rounds until the summary budget is saturated.",
"At each round, this model selects the most likely sentence according to the current unigram distribution.",
"The selected sentence is added to the summary and removed from the pool of available sentences.",
"The unigram probabilities of all words in the selected sentence are heuristically discounted (replaced by square root).",
"Select-then-discount operations continue until the summary is written.",
"Discounting is crucial to prevent repetition: once a word (or ideally a concept) has been selected for the summary, it is much less likely to be picked in a subsequent round.",
"We use Sum-Basic as a Context Agnostic extractive model: we provide the document d as an input to the model and run Sum-Basic for exactly one round.",
"The selected sentence is considered to be the update sentence x .",
"Context Informed Sum-Basic (CISB): We developed a simple modification of the Sum-basic technique to incorporate information from the curated text s as context.",
"Initial unigram probabilities are computed using word counts from both the curated text and the document.",
"Next, for each sentence in the curated text, we apply just the discount procedure, updating the probability distribution as if those sentences were selected.",
"Finally, we select the single sentence from the document that is most likely according to the resulting disFigure 2: Dataset creation process counted unigram probabilities.",
"This simple modification of Sum-Basic helps select a sentence that is novel with respect to the curated text by lowering the probability of all words already present.",
"Extractive CAG, CIG, CRG Models: Any generative model of x can also be used as an extractive model: we simply estimate the likelihood of each sentence in the document according to the model, and select the most likely one.",
"Generative models may fail because either they are unable to select the most relevant information, or because the resulting sentence is ill-formed.",
"Extractive ranking circumvents all errors due to generation and can help isolate model issues.",
"Hybrid CAG, CIG, CRG Models: Since the document d can be quite large, a generative model may struggle to pick the most salient information based on the context.",
"To simplify the generative modeling task, we can pre-filter the document toward only the most salient parts.",
"We use the Context Informed Sum-Basic technique to first select the top five sentences from the document.",
"We supply only these five sentences in place of the source document d , then apply the CAG, CIG, and CRG techniques described above.",
"Our ideal dataset would capture the edits made to some curated reference text in light of a stream of new articles describing changes.",
"For instance, one might maintain reference software documentation about a system, making additions or changes in light of incoming emails describing updates or additions.",
"This type of data is unfortunately difficult to obtain due to privacy considerations.",
"However, Wikipedia can provide a naturally-occurring body of text with references to primary sources.",
"A substantial fraction of Wikipedia sentences include citations to supporting documentation, a ripe source of data for content transfer.",
"That Corpus Input Output #Examples Rouge-1 R Gigaword (Graff and Cieri, 2003) 10 1 10 1 10 6 78.7 CNN/DailyMail (Nallapati et al., 2016) 10 2 10 3 10 1 10 5 76.1 WikiSum (Liu et al., 2018) 10 2 10 6 10 1 10 3 10 6 59.2 Content Transfer (this paper) 10 1 10 3 10 1 10 2 10 5 66.9 Table 1: Key characteristics of the dataset: approximate size of input and output instances, approximate dataset size, and recall of reference output against the source material, as a measure of dataset difficulty.",
"said, some of the citations are quite difficult to follow or trust: broken URLs might lead to lost information; citations to books are difficult to consume given the large scope of information; etc.",
"Therefore, we only consider cases where the reference links to some well-known news sources.",
"Based on citation frequency, we selected a list of 86 domains, 3 primarily news outlets.",
"During the data creation process we only considered citations belonging to one of these eighty six domains.",
"We make this simplifying assumption for several reasons.",
"First, our English Wikipedia dump contained approximately 23.7 million citation URLS belonging to 1.6 million domains; fine-grained fil-tering would be a daunting task.",
"Our hand-vetted list of domains is a high-precision (albeit low-recall) means of selecting clean data.",
"Second, we wanted to ground the generated text on credible, consistent, and well-written sources of information.",
"Furthermore, well-known domains are readily available on Common Crawl, 4 leading to an easily-reproducible dataset.",
"Fig. 2 illustrates the procedure used to create a dataset for the task described in Section 2 from Wikipedia.",
"For each Wikipedia article, we extracted the plain text without markdown.",
"When encountering a citation belonging to a selected do-main, we considered the sentence just before the citation to be generated based on the content of the citation.",
"This sentence became our reference update sentence: the additional update sentence x added to the curated text s to produce the new text s (cid:48) .",
"The k sentences prior to the target sentence in the Wikipedia article were considered to be the curated text s .",
"In our case, we used a window of k = 3 sentences to select our context.",
"The cited article acted as the document d , from which the appropriate update x can be generated.",
"The HTML source of the citation was down-3 This list is provided in the data release of this paper.",
"loaded from Common Crawl for reproducibility and consistency.",
"The HTML derived from Common Crawl is then processed to get the plain text of the news article.",
"The resulting dataset C consists of aligned tuples C = (cid:0) d i , s i , x i (cid:1) i [1 ,n ] , where n is the total number of samples in the dataset.",
"Alternatively, one might rely on Wikipedia edit history to create a dataset.",
"In this setting, edits which include a new citation would act as the update x .",
"Although this has the upside of identifying potentially complex, multi-sentence updates, preliminary analysis suggested that these edits are noisy.",
"Editors may first generate the content in one edit, then add the citation in a subsequent edit, they may only rephrase a part of the text while adding the citation, or they may check in a range of changes across the document in a single edit.",
"Our simpler sentence-based approach leads to an interesting dataset with fewer complications.",
"Dataset Statistics and Analysis Table 1 describes some key statistics of our dataset and how it compares with other datasets used for similar tasks.",
"The ROUGE-1 recall scores of reference output x against document d suggest this task will be difficult for conventional extractive summarization techniques.",
"5 We hypothesize that during content transfer, the language in document d often undergoes substantial transformations to fit the curated text s .",
"The average unigram overlap (after stopword removal) between the document d and the reference update sentence x is 55 .",
"79 %; overlap of the curated text s and the reference update sentence x is 30 .",
"12 %.",
"This suggests the reference update sentence x can be derived from the document d , though not extracted directly.",
"Furthermore, the content of x is very different from the content of s but appears topically related.",
"Our dataset consists of approximately 290k unique Wikipedia articles.",
"Some heavily-cited 5 ROUGE-1 recall was computed on a sample of 50k instances from the entire dataset.",
"articles include Timeline of investigations into Trump and Russia (2017)', List of England Test cricketers', and 2013 in science'.",
"We randomly split the dataset into 580k training instances, 6049 validation instances, and 50k test instances, ensuring that any Wikipedia article appearing in the train set must not appear in validation or test.",
"We evaluate our models using both automated metrics and, for a subset of promising systems, human assessment.",
"One key evaluation is the similarity between the model generated update sentence and reference update sentence.",
"We also ask human judges to assess grammaticality and coherence.",
"Hyper-parameter settings: For all our experiments with generative models, we have used bidirectional encoder, 2 layers in encoder and decoder, RNN size of 128, word vector size of 100.",
"We have used sentencepiece toolkit 6 to use byte-pair-encoding (BPE) with a vocabulary size of 32k.",
"We used stochastic gradient descent optimizer and the stopping criterion was perplexity on the validation set.",
"We filtered our dataset to contain instances which have length of the document between 50 and 2000 tokens, length of the curated text between 20 and 500 tokens and the length of the update sentence between 5 and 200 tokens.",
"Our primary automated evaluation metric for system-generated update sentences is ROUGE-L F1 against reference update sentence, 7 though we also include BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2011) as additional indicators.",
"ROUGE is a standard family of metrics for summarization tasks; ROUGE-L measures the longest common subsequence between the system and the reference, capturing both lexical selection and word order.",
"Table 2 illustrates that this task is quite difficult for extractive techniques.",
"Furthermore, the results emphasize the importance of having curated text as context when generating the update.",
"In all experimental conditions, models aware of context perform much better than models agnostic of it.",
"In contrast to Liu et al. (2018), generative approaches 6 https://github.com/google/sentencepiece 7 We use the pyrouge toolkit along with ROUGE-1.5.5: https://github.com/bheinzerling/pyrouge Model ROUGE-LBLEUMETEORSB 5.6 (5.65.7) 0.6 2.0 CISB 7.0 (7.07.1) 1.0 2.8 CAG 9.1 (9.09.2) 1.2 4.6 COG 13.5 (13.413.6) 1.7 3.5 CIG 16.0 (15.9-16.1) 3.5 5.3 CRG 14.7 (14.614.8) 2.6 4.5 Hybrid CAG 8.0 (7.98.0) 1.0 3.8 Hybrid CIG 15.0 (14.915.1) 2.7 4.7 Hybrid CRG 13.5 (13.413.6) 2.3 4.1 Extractive CAG 9.3 (9.29.3) 1.1 3.2 Extractive CIG 9.3 (9.29.3) 1.1 3.2 Extractive CRG 9.2 (9.19.3) 1.1 3.2 Oracle 28.8 (28.729.0) 11.0 10.9 Table 2: Automated metrics; 95% confidence interval in parentheses.",
"outperformed hybrid, likely because we only had a single input document.",
"Extractive CAG, CIG, and CRG all outperformed both Sum-Basic and the context informed variant.",
"Extractive CAG was on-par with generative CAG, suggesting the generated sentences were of reasonable quality.",
"However, generative CIG and CRG were substantially better: rewriting to match context was beneficial.",
"The Oracle system of Table 2 aims to establish an upper limit attainable by extractive methods, using the following oracle experiment: For each test instance (cid:0) d i , s i , x i (cid:1) , we enumerate each extracted sentence e of document d i and select the one with highest ROUGE-L score as Oracle 's update sentence x i (i.e., x i = arg max e d i ROUGE-L ( x i , e ) ).",
"Note this yields a very optimistic upper bound, as the same ground truth x i is used both to select an extractive sentence from a large pool of candidates and for final automatic metric scoring.",
"8 Nevertheless, these oracle results let us draw two conclusions: (1) They give us better perspective to assess the non-oracle systems, and we believe that their seemingly low 8 Previous work has shown that this type of oracle can yield upper bounds that are unrealistically high, and they tend to be above human performance (Och et al., 2004, Table 1).",
"One remedy suggested by Och et al. is a round-robin oracle ensuring that the reference (ground truth) used by the argmax is distinct from that of the final automatic evaluation, but that scheme is only possible with a multi-reference test set.",
"automatic evaluation scores are quite reasonable relative to the optimistic upper bound (e.g., CIGs ROUGE-Ls score is 55% of the oracle).",
"(2) The oracle results suggest that humans are substantially changing the surface realization as they summarize for Wikipedia, as otherwise the oracle results would be much closer to maximum metric scores (i.e., 100%).",
"This shows that extractive methods are not enough for this task, justifying our use of generation techniques.",
"For careful evaluation of the performance of the most promising configurations (CAG and CIG models) we also asked human judges for quality assessments.",
"We solicited several types of evaluation, including two relative comparisons between pairs of system outputs and an absolute quality evaluation of individual system outputs.",
"Close to reference (Relative): The first relative comparison measured how accurately the generated update reflected information in the reference update.",
"Here, the annotators saw only the reference update sentence and the outputs of two systems labeled A and B in a randomized order.",
"We asked the annotators Which system output is closest in meaning to the reference update?",
"The annotators could pick system A , system B , or indicate that neither was preferred.",
"This is a simple evaluation task though potentially biased toward the sole reference update.",
"Coherent to context (Relative): The second relative comparison measured whether the generated output contained salient information from the document written in a manner appropriate to the curated text.",
"The annotators saw the document d , the curated text s , and the outputs of the two systems A and B , again in a random order.",
"They were asked, Which system output is more accurate relative to the background information given in the snippet of the article?",
"Each judge had to consider whether the information fits with the curated text and also whether system-generated content could be supported by the document.",
"Four human judges each annotated 30 unique output pairs for these two relative comparison settings, a total of 240 relative judgments.",
"Table 3 shows the results: the context-aware CIG system was substantially better in both settings.",
"DUC Guidelines (Absolute): In addition, we performed an absolute quality evaluation following the guidelines from DUC 2007.",
"9 Each judge was presented with a single system output, then they were asked to evaluate five aspects of system output: grammaticality, non-redundancy, referential clarity, focus, and structure/coherence.",
"For each aspect, the judge provided an assessment on a five-point scale: (1) Very Poor, (2) Poor, (3) Barely Acceptable, (4) Good, (5) Very Good.",
"We gathered 120 additional judgments in this setting (4 judges, 30 outputs).",
"Again, context-aware CIG substantially outperforms CAG across the board, as seen in Table",
"4. Observations: Systems unaware of the curated text s tend to generate long updates with repeated frequent words or phrases.",
"Consider the ratio of unique tokens over the total number of tokens in the generated output, which we denote by R .",
"A small R indicates many repeated tokens.",
"We find that 88 % of the time this ratio R falls below 0 .",
"5 for the CAG model, i.e. for 88 % instances, more than 50 % of the words in the generated output are repeats.",
"This number is relatively small 14 % for CIG and 20 % for CRG in context aware models.",
"In the reference updates only 0 .",
"21 % instances repeat more than 50 % of words.",
"Figs.",
"3 and 4 show good and bad examples generated by the CIG model along with the document, curated text and the reference update.",
"Table 5 has a set of updates generated by the CIG model as 9 http://duc.nist.gov/duc2007/ quality-questions.txt Document (News Article) sequels are fairly new to bollywood, but director sanjay gadhvi realised there was cash to be made from resurrecting his hit action thriller dhoom, by casting sexy young stars like hrithik rosha, aishwarya rai and abhishek bachchan in an even bigger game of cops and robbes...that the twist in dhoom 2's tail is not explained is yet another shortcoming.",
"it's only roshan's charismatic performance as the criminal mastermind, and the sizzling chemistry he shares with rai's sassy cohort, that rescues this adventure from becoming an elongated tourism commercial.",
"Curated Text (Wikipedia Context) it makes no lasting contributions to world cinema, but if two-and-a-half hours of disposable entertainment are all you're after, you could do far worse.",
"l.a. weekly's david chute stated the film was, a movie meal as satisfying as this one can make you feel that nothing else",
"matters. jaspreet pandohar of the bbc gave it a two-star rating, writing by roping in acclaimed action director alan amin to take care of the thrills and spills, you'd expect gadhvi to have spent time crafting out a sophisticated storyline instead of simply sending his cast on a cat-and-mouse chase around the globe. Reference Update it's only roshan's charismatic performance as the criminal mastermind, and the sizzling chemistry he shares with rai's sassy cohort, that rescues this adventure from becoming an elongated tourism commercial.",
"well as the reference update.",
"As we can see in examples 3 and 4, the CIG model misplaces the date but correctly generates the remaining content.",
"In examples 1 and 2, the CIG model appears to successfully select the correct pronouns for co-reference resolution, though it gets confused as to when to use the pronoun or the named entity.",
"Examples 5 and 6 represent failure cases due to missing words.",
"The proposed content transfer task is clearly related to a long series of papers in summarization, including recent work with neural techniques (Rush et al., 2015; Nallapati et al., 2016).",
"In particular, one recent paper casts the the task of generating an entire Wikipedia article as a multi-document summarization problem (Liu et al., 2018).",
"Their best-performing configuration was a two-stage extractive-abstractive framework; a multi-stage approach helped circumvent the diffi-Document (News Article) anne kirkbride, who portrayed bespectacled, gravelly-voiced deirdre barlow in coronation street for more that four decades, has died.",
"the 60-year-old, whose first appearance in the soap opera was in 1972, died in a manchester hospital after a short illness....",
"kirkbride had left the soap opera after she was diagnosed with non-hodgkin's lymphoma in 1993 but returned some months later after treatment and spoke candidly about how she had struggled with depression following the diagnosis...",
"Curated Text (Wikipedia Context) in 1993, kirkbride was diagnosis with non-hodgkin's lymphoma.",
"she spoke to the british press about her bout of depression following the diagnosis.",
"she was cured within a year of being diagnosed.",
"she was diagnosed with non-hodgkin's lymphoma.",
"culties of purely abstractive methods given quite large input token sequences.",
"Looking beyond the clear task similarity of authoring Wikipedia style content, there are several crucial differences in our approach.",
"First, the goal of that paper is to author the whole page, starting from nothing more than a set of primary sources, such as news articles.",
"In practice, however, Wikipedia articles often contain information outside these primary sources, including common sense knowledge, framing statements to set the article in context, and inferences made from those primary sources.",
"Our task restricts the focus to content where a human editor explicitly decided to cite some external source.",
"Hence, it is much more likely that the resulting summary can be derived from the external source content.",
"Furthermore, we focus on the act of adding information to existing articles, rather than writing a complete article without any context.",
"These two scenarios are clearly useful yet complementary: sometimes people want to produce a new reference text where nothing existed before; in other cases the goal is to maintain and update an existing reference.",
"Another closely related task is update summarization (Dang and Owczarzak, 2008), where systems attempt to provide a brief summary of the novel information in a new article assuming the user has read a known set of prior documents.",
"Our focus on curating an authoritative resource Reference Update Generated Update",
"is a substantial difference.",
"Also our datasets are substantially larger, enabling generative models to be used in this space, where prior update summarization techniques have been primarily extractive (Fisher and Roark, 2008; Li et al., 2015).",
"For any generation task, it is important to address both the content (what' is being said) as well its style (how' it is being said).",
"Recently, a great deal of research has focused on the how' (Li et al., 2018; Shen et al., 2017), including efforts to collect a parallel dataset that differs in politeness (Rao and Tetreault, 2018), to control author characteristics in the generated sentences (Prabhu-moye et al., 2018), to control the perceived personality traits of dialog responses (Zhang et al., 2018).",
"We believe this research thread is complementary to our efforts on generating the what'.",
"Another form of content transfer bridges across modalities: text generation given schematized or semi-structured information.",
"Recent research has addressed neural natural language generation techniques given a range of structured sources: selecting relevant database records and generating natural language descriptions of them (Mei et al., 2016), selecting and describing slot-value pairs for task-specific dialog response generation (Wen et al., 2015), and even generating Wikipedia biography abstracts given Infobox information (Lebret et al., 2016).",
"Our task, while grounded in external content, is different in that it leverages linguistic grounding as well as prior text context when generating text.",
"This challenging setting enables a huge range of grounded generation tasks: there are vast amounts of unstructured textual data.",
"This article highlights the importance of the task of content transfer : generation guided by an existing curated text to set context and tone, and grounded in a new source providing useful information.",
"information.",
"We demonstrate how multiple models can address this challenging problem on a novel dataset derived from Wikipedia and Common Crawl.",
"This dataset is released to the community along with scripts and models.",
"10 We find this setting particularly promising given the opportunity for human interaction: in contrast to approaches that do not rely on human-generated context, we establish a collaboration between user and computer.",
"Each newly suggested sentence can be rejected, accepted, or edited before inclusion, and the edits can provide more training data.",
"We believe there are many natural extensions to this work.",
"The models described here are mostly extensions of existing approaches; approaches targeting novelty detection, focus, and document structure could lead to substantial improvements.",
"We could apply models in series to incorporate changes for a set of documents.",
"Future work could also explore changes that modify existing content rather than simply appending.",
"We are grateful to the anonymous reviewers, as well as Alan W. Black, Chris Brockett, Bill Dolan, Sujay Jauhar, Michael Gamon, Jianfeng Gao, Dheeraj Rajagopal, and Xuchao Zhang for their helpful comments and suggestions on this work.",
"We also thank Emily Ahn, Khyati Chandu, Ankush Das, Priyank Lathwal, and Dheeraj Ra-jagopal for their help with the human evaluation."
] | [
"abstain",
"abstain",
"method",
"result",
"objective",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"objective",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Homographs, words with different meanings but the same surface form, have long caused difficulty for machine translation systems, as it is difficult to select the correct translation based on the context.",
"However, with the ad-vent of neural machine translation (NMT) systems, which can theoretically take into account global sentential context, one may hypothesize that this problem has been alleviated.",
"In this paper, we first provide empirical evidence that existing NMT systems in fact still have significant problems in properly translating ambiguous words.",
"We then proceed to describe methods, inspired by the word sense disambiguation literature, that model the context of the input word with context-aware word embeddings that help to differentiate the word sense before feeding it into the encoder.",
"Experiments on three language pairs demonstrate that such models improve the performance of NMT systems both in terms of BLEU score and in the accuracy of translating homographs.",
"1 1 Introduction Neural machine translation (NMT; Sutskever et al. (2014); Bahdanau et al. (2015), 2), a method for MT that performs translation in an end-to-end fashion using neural networks, is quickly becoming the de-facto standard in MT applications due to its impressive empirical results.",
"One of the drivers behind these results is the ability of NMT to capture long-distance context using recurrent neural networks in both the encoder, which takes the input and turns it into a continuous-space representation, and the decoder, which tracks the Equal contribution.",
"target-sentence state, deciding which word to output next.",
"As a result of this ability to capture long-distance dependencies, NMT has achieved great improvements in a number of areas that have bedeviled traditional methods such as phrase-based MT (PBMT; Koehn et al. (2003)), including agreement and long-distance syntactic dependencies (Neubig et al., 2015; Bentivogli et al., 2016).",
"One other phenomenon that was poorly handled by PBMT was homographs words that have the same surface form but multiple senses.",
"As a result, PBMT systems required specific separate modules to incorporate long-term context, performing word-sense (Carpuat and Wu, 2007b; Pu et al., 2017) or phrase-sense (Carpuat and Wu, 2007a) disambiguation to improve their handling of these phenomena.",
"Thus, we may wonder: do NMT systems suffer from the same problems when translating homographs?",
"Or are the recurrent nets applied in the encoding step, and the strong language model in the decoding step enough to alleviate all problems of word sense ambiguity?",
"In 3 we first attempt to answer this question quantitatively by examining the word translation 1336 accuracy of a baseline NMT system as a function of the number of senses that each word has.",
"Results demonstrate that standard NMT systems make a significant number of errors on homographs, a few of which are shown in Fig.",
"1. With this result in hand, we propose a method for more directly capturing contextual information that may help disambiguate difficult-to-translate homographs.",
"Specifically, we learn from neural models for word sense disambiguation (Kalch-brenner et al., 2014; Iyyer et al., 2015; Kageback and Salomonsson, 2016; Yuan et al., 2016; Suster et al., 2016), examining three methods inspired by this literature ( 4).",
"In order to incorporate this information into NMT, we examine two methods: gating the word-embeddings in the model (simi-larly to Choi et al. (2017)), and concatenating the context-aware representation to the word embedding ( 5).",
"To evaluate the effectiveness of our method, we compare our context-aware models with a strong baseline (Luong et al., 2015) on the English-German, English-French, and English-Chinese WMT dataset.",
"We show that our proposed model outperforms the baseline in the overall BLEU score across three different language pairs.",
"Quantitative analysis demonstrates that our model performs better on translating homographs.",
"Lastly, we show sample translations of the baseline system and our proposed model.",
"We follow the global-general-attention NMT architecture with input-feeding proposed by Luong et al. (2015), which we will briefly summarize here.",
"The neural network models the conditional distribution over translations Y = ( y 1 , y 2 , . . . , y m ) given a sentence in source language X = ( x 1 , x 2 , . . . x n ) as P ( Y | X ) .",
"A NMT system consists of an encoder that summarizes the source sentence X as a vector representation h , and a decoder that generates a target word at each time step conditioned on both h and previous words.",
"The conditional distribution is optimized with cross-entropy loss at each decoder output.",
"The encoder is usually a uni-directional or bidirectional RNN that reads the input sentence word by word.",
"In the more standard bi-directional case, before being read by the RNN unit, each word in X is mapped to an embedding in continuous vector space by a function f e .",
"M e R | V s | d is a matrix that maps a one-hot representation of x t , 1 ( x t ) to a d -dimensional vector space, and V s is the source vocabulary.",
"We call the word embedding computed this way Lookup embedding.",
"The word embeddings are then read by a bi-directional RNN h t = RNN e ( h t 1 , f e ( x t )) (2) h t = RNN e ( h t +1 , f e ( x t )) (3) After being read by both RNNs we can compute the actual hidden state at step t , h t = [ h t ; h t ] , and the encoder summarized representation h = h n .",
"The recurrent units RNN e and RNN e are usually either LSTMs (Hochreiter and Schmidhuber, 1997) or GRUs (Chung et al., 2014).",
"The decoder is a uni-directional RNN that decodes the t th target word conditioned on (1) previous decoder hidden state g t 1 , (2) previous word y t 1 , and (3) the weighted sum of encoder hidden states a t .",
"The decoder maintains the t th hidden state g t as follows, g t = RNN d ( g t 1 , f d ( y t 1 ) , a t ) (4) Again, RNN d is either LSTM or GRU, and f d is a mapping function in target language space.",
"The general attention mechanism for computing the weighted encoder hidden states a t first computes the similarity between g t 1 and h t 0 for t 0 = 1 , 2 , . . . , n .",
"score ( g t 1 , h t 0 ) = g t 1 W att h > t 0 (5) The similarities are then normalized through a softmax layer , which results in the weights for encoder hidden states.",
"t,t 0 = exp( score ( g t 1 , h t 0 )) P nk =1 exp( score ( g t 1 , h k )) (6) We can then compute a t as follows, a t = n X k =1 t,k h k (7) Finally, we compute the distribution over y t as, g t = tanh ( W 1 [ g t ; a t ]) (8) p ( y t | y <t , X ) = softmax ( W 2 g t ) (9) 1337 0 5 10 15 20 number of senses 0.1 0.2 0.3 0.4 0.5 0.6 0.7 F 1 en-de en-fr en-zh Figure 2: Translation performance of words with different numbers of senses.",
"As described in Eqs.",
"(2) and (3), NMT models encode the words using recurrent encoders, theoretically endowing them with the ability to handle homographs through global sentential context.",
"However, despite the fact that they have this ability, our qualitative observation of NMT results revealed a significant number of ambiguous words being translated incorrectly, casting doubt on whether the standard NMT setup is able to appropriately learn parameters that disambiguate these word choices.",
"To demonstrate this more concretely, in Fig. 2 we show the translation accuracy of an NMT system with respect to words of varying levels of ambiguity.",
"Specifically, we use the best baseline NMT system to translate three different language pairs from WMT test set (detailed in 6) and plot the F1-score of word translations by the number of senses that they have.",
"The number of senses for a word is acquired from the Cambridge English dictionary, 2 after excluding stop words.",
"3 We evaluate the translation performance of words in the source side by aligning them to the target side using fast-align (Dyer et al., 2013).",
"The aligner outputs a set of target words to which the source words aligns for both the reference translation and the model translations.",
"F1 score is calculated between the two sets of words.",
"After acquiring the F1 score for each word, we bucket the F1 scores by the number of senses, and plot the average score of four consecutive buckets as shown in Fig.",
"2. As we can see from the results, the F1 score for words decreases as the number of senses increases for three different language 2 http://dictionary.cambridge.org/us/ dictionary/english/ 3 We use the stop word list from NLTK (Bird et al., 2009).",
"pairs.",
"This demonstrates that the translation performance of current NMT systems on words with more senses is significantly decreased from that for words with fewer senses.",
"From this result, it is evident that modern NMT architectures are not enough to resolve the problem of homographs on their own.",
"The result corresponds to the findings in prior work (Rios et al., 2017).",
"Word sense disambiguation (WSD) is the task of resolving the ambiguity of homographs (Ng and Lee, 1996; Mihalcea and Faruque, 2004; Zhong and Ng, 2010; Di Marco and Navigli, 2013; Chen et al., 2014; Camacho-Collados et al., 2015), and we hypothesize that by learning from these models we can improve the ability of the NMT model to choose the correct translation for these ambiguous words.",
"Recent research tackles this problem with neural models and has shown state-of-the art results on WSD datasets (Kageback and Salomonsson, 2016; Yuan et al., 2016).",
"In this section, we will summarize three methods for WSD which we will further utilize as three different context networks to improve NMT.",
"Neural bag-of-words (NBOW) Kalchbrenner et al. (2014); Iyyer et al. (2015) have shown success by representing full sentences with a context vector, which is the average of the Lookup embeddings of the input sequence c t = 1 n n X k =1 M > c 1 ( x k ) (10) This is a simple way to model sentences, but has the potential to capture the global topic of the sentence in a straightforward and coherent way.",
"However, in this case, the context vector would be the same for every word in the input sequence.",
"Bi-directional LSTM (BiLSTM) Kageback and Salomonsson (2016) leveraged a bidirectional LSTM that learns a context vector for the target word in the input sequence and predicts the word sense with a multi-layer perceptron.",
"Specifically, we can compute the context vector c t for t th word similarly to bi-directional encoder as follows, c t = RNN c ( c t 1 , f c ( x t )) (11) c t = RNN c ( c t +1 , f c ( x t )) (12) 1338 c t = [ c t ; c t ] (13) RNN c , RNN c are forward and backward LSTMs repectively, and f c ( x t ) = M > c 1 ( x t ) is a function that maps a word to continous embedding space.",
"Held-out LSTM (HoLSTM) Yuan et al. (2016) trained a LSTM language model, which predicts a held-out word given the surrounding context, with a large amount of unlabeled text as training data.",
"Given the context vector from this language model, they predict the word sense with a WSD classifier.",
"Specifically, we can compute the context vector c t for t th word by first replacing t th word with a special symbol (e.g. < $ > ).",
"We then feed the replaced sequence to a uni-directional LSTM: c i = RNN c ( c i 1 , f c ( x i )) (14) Finally, we can get context vector for the t th word c t = c n (15) RNN c and f c are defined in BiLSTM paragraph, and n is the length of the sequence.",
"Despite the fact that the context vector is always the last hidden state of the LSTM no matter which word we are targeting, the input sequence read by the HoLSTM is actually different every time.",
"Now that we have several methods to incorporate global context regarding a single word, it is necessary to incorporate this context with NMT.",
"Specifically, we propose two methods to either Gate or Concatenate a context vector c t with the Lookup embedding M > e 1 ( x t ) to form a context-aware word embedding before feeding it into the encoder as shown in Fig.",
"3. The detail of these methods is described below.",
"Gate Inspired by Choi et al. (2017), as our first method for integration of context-aware word embeddings, we use a gating function as follows: f 0 e ( x t ) = f e ( x t ) (cid:12) ( c t ) (16) = M > e 1 ( x t ) (cid:12) ( c t ) (17) The symbol (cid:12) represents element-wise multiplication, and is element-wise sigmoid function.",
"Choi et al. (2017) use this method in concert with averaged embeddings from words in source language like the NBOW model above, which naturally uses the same context vectors for all time steps.",
"In this paper, we additionally test this function with context vectors calculated using the BiLSTM and HoLSTM .",
"Concatenate We also propose another way for incorporating context: by concatenating the context vector with the word embeddings.",
"This is expressed as below: f 0 e ( x t ) = W 3 [ f e ( x t ); c t ] (18) = W 3 [ M > e 1 ( x t ); c t ] (19) W 3 is used to project the concatenated vector back to the original d -dimensional space.",
"For each method can compute context vector c t with either the NBOW, BiLSTM, or HoLSTM described in",
"4. We share the parameters in f e with f c (i.e. M e = M c ) since the vocabulary space is the same for context network and encoder.",
"As a result, our context network only slightly increases the number of model parameters.",
"Details about the number of parameters of each model we use in the experiments are shown in Table",
"1. 6 Experiments We evaluate our model on three different language pairs: English-French (WMT'14), and English-German (WMT'15), English-Chinese (WMT'17) 1339 Context Integration uni/bi #layers #params Ppl WMT14 WMT15 None 2 85M 7.12 20.49 22.95 None 2 83M 7.20 21.05 23.83 None 3 86M 7.50 20.86 23.14 NBOW Concat 2 85M 7.23 20.44 22.83 NBOW Concat 2 83M 7.28 20.76 23.61 HoLSTM Concat 2 87M 7.19 20.67 23.05 HoLSTM Concat 2 86M 7.04 21.15 23.53 BiLSTM Concat 2 87M 6.88 21.80 24.52 BiLSTM Concat 2 85M 6.87 21.33 24.37 NBOW Gating 2 85M 7.14 20.20 22.94 NBOW Gating 2 83M 6.92 21.16 23.52 BiLSTM Gating 2 87M 7.07 20.94 23.58 BiLSTM Gating 2 85M 7.11 21.33 24.05 Table 1: WMT'14, WMT'15 English-German results We show perplexities (Ppl) on development set and tokenized BLEU on WMT'14 and WMT'15 test set of various NMT systems.",
"with English as the source side.",
"For German and French, we use a combination of Europarl v7, Common Crawl, and News Commentary as training set.",
"For development set, newstest2013 is used for German and newstest2012 is used for French.",
"For Chinese, we use a combination of News Commentary v12 and the CWMT Corpus as the training set and held out 2357 sentences as the development set.",
"Translation performances are reported in case-sensitive BLEU on newstest2014 (2737 sentences), newstest2015 (2169 sentences) for German, newstest2013 (3000 sentences), newstest2014 (3003 sentences) for French, and news-dev2017 (2002 sentences) for Chinese.",
"4 Details about tokenization are as follows.",
"For German, we use the tokenized dataset from Luong et al. (2015); for French, we used the moses (Koehn et al., 2007) tokenization script with the -a flag; for Chinese, we split sequences of Chinese characters, but keep sequences of non-Chinese characters as they are, using the script from IWSLT Evaluation 2015.",
"5 We compare our context-aware NMT systems with strong baseline models on each dataset.",
"We limit our vocabularies to be the top 50K most frequent words for both source and target language.",
"Words not in these shortlisted vocabularies are converted into an h unk i token.",
"When training our NMT systems, following Bahdanau et al. (2015), we filter out sentence pairs whose lengths exceed 50 words and shuffle mini-batches as we proceed.",
"We train our model with the following settings using SGD as our optimization method.",
"(1) We start with a learning rate of 1 and we begin to halve the learning rate every 1340 epoch once it overfits.",
"6 (2) We train until the model converges.",
"(i.e. the difference between the perplexity for the current epoch and the previous epoch is less than 0.01) (3) We batched the instances with the same length and our maximum mini-batch size is 256, and (4) the normalized gradient is rescaled whenever its norm exceeds 5.",
"(6) Dropout is applied between vertical RNN stacks with probability 0.3.",
"Additionally, the context network is trained jointly with the encoder-decoder architecture.",
"Our model is built upon OpenNMT (Klein et al., 2017) with the default settings unless otherwise noted.",
"In this section, we compare our proposed context-aware NMT models with baseline models on English-German dataset.",
"Our baseline models are encoder-decoder models using global-general attention and input feeding on the decoder side as described in 2, varying the settings on the encoder side.",
"Our proposed model builds upon baseline models by concatenating or gating different types of context vectors.",
"We use LSTM for encoder, decoder, and context network.",
"The decoder is the same across baseline models and proposed models, having 500 hidden units.",
"During testing, we use beam search with a beam size of 5.",
"The dimension for input word embedding d is set to 500 across encoder, decoder, and context network.",
"Settings for three different baselines are listed below.",
"Baseline 2: A bi-directional LSTM with 250 hidden units and 2 layers of stacking LSTM.",
"Each state is summarized by concatenating the hidden states of forward and backward encoder into 500 hidden units.",
"Baseline 3: A bi-directional LSTM with 250 hidden units and 3 layers of stacking LSTM.",
"This can be compared with the proposed method, which adds an extra layer of computation before the word embeddings, essentially adding an extra layer.",
"The context network uses the below settings.",
"BiLSTM: A single-layer bi-directional LSTM with 250 hidden units.",
"The context vector is represented by concatenating the hidden states of forward and backward LSTM into a 500 dimensional vector.",
"HoLSTM: A single-layer uni-directional LSTM with 500 hidden units.",
"The results are shown in Table",
"1. The first thing we observe is that the best context-aware model (results in bold in the table) achieved improvements of around 0.7 BLEU on both WMT14 and WMT15 over the respective baseline methods with 2 layers.",
"This is in contrast to simply using a 3-layer network, which actually degrades performance, perhaps due to the vanishing gradients problem it increases the difficulty in learning.",
"Next, comparing different methods for incorporating context, we can see that BiLSTM performs best across all settings.",
"HoLSTM performs slightly better than NBOW, and NBOW obviously suffers from having the same context vector for every word in the input sequence failing to outperform the corresponding baselines.",
"Comparing the two integration methods that incorporate context into word embeddings.",
"Both methods improve over the baseline with BiLSTM as the context network.",
"Concatenating the context vector and the word embedding performed better than gating.",
"Finally, in contrast to the baseline, it is not obvious whether using uni-directional or bi-directional as the encoder is better for our proposed models, particularly when BiLSTM is used for calculating the context network.",
"This is likely due to the fact that bi-directional information is already captured by the context network, and may not be necessary in the encoder itself.",
"We further compared the two systems on two different languages, French and Chinese.",
"We achieved 0.5-0.8 BLEU improvement, showing our proposed models are stable and consistent across different language pairs.",
"The results are shown in Table",
"2. To show that our 3-layer models are properly trained, we ran a 3-layer bidirectional encoder with residual networks on En-Fr and got 27.45 for WMT13 and 30.60 for WMT14, which is similarly lower than the two layer result.",
"It should be noted that previous work such as Britz et al. (2017) have 1341 language System Homograph All Words F1 Precision Recall F1 Precision Recall en de baseline 0.401 0.422 0.382 0.547 0.569 0.526 best 0.426 (+ 0.025 ) 0.449 (+ 0.027 ) 0.405 (+ 0.023 ) 0.553 (+ 0.006 ) 0.576 (+ 0.007 ) 0.532 (+ 0.006 ) en fr baseline 0.467 0.484 0.451 0.605 0.623 0.587 best 0.480 (+ 0.013 ) 0.496 (+ 0.012 ) 0.465 (+ 0.014 ) 0.613 (+ 0.008 ) 0.630 (+ 0.007 ) 0.596 (+ 0.009 ) en zh baseline 0.578 0.587 0.570 0.573 0.605 0.544 best 0.590 (+ 0.012 ) 0.599 (+ 0.012 ) 0.581 (+ 0.011 ) 0.581 (+ 0.008 ) 0.612 (+ 0.007 ) 0.552 (+ 0.008 ) Table 3: Translation results for homographs and all words in our NMT vocabulary.",
"In order to examine whether our proposed model can better translate words with multiple senses, we evaluate our context-aware model on a list of homographs extracted from Wikipedia 7 compared to the baseline model on three different language pairs.",
"For the baseline model, we choose the best-performing model, as described in 6.2.",
"To do so, we first acquire the translation of homographs in the source language using fast-align (Dyer et al., 2013).",
"We run fast-align on all the parallel corpora including training data and testing data 8 because the unsupervised nature of the algorithm requires it to have a large amount of training data to obtain accurate alignments.",
"The settings follow the default command on fast-align github page including heuristics combining forward and backward alignment.",
"Since there might be multiple aligned words in the target language given a word in source language, we treat a match between the aligned translation of a targeted word of the reference and the translation of a given model as true positives and use F1, precision, and recall as our metrics, and take the micro-average across all the sentence pairs.",
"9 We calculated the scores for the 50000 words/characters from our source vocabulary using only English words.",
"The results are shown in Table",
"3. The table shows two interesting results: (1) The score for the homographs is lower than the score obtained from all the words in the vocabu-7 https://en.wikipedia.org/wiki/List_ of_English_homographs 8 Reference translation, and all the system generated translations.",
"lary.",
"This shows that words with more meanings are harder to translate with Chinese as the only exception.",
"10 (2) The improvement of our proposed model over baseline model is larger on the homographs compared to all the words in vocabulary.",
"This shows that although our context-aware model is better overall, the improvements are particularly focused on words with multiple senses, which matches the intuition behind the design of the model.",
"We show sample translations on English-Chinese WMT'17 dataset in Table 4 with three kinds of examples.",
"We highlighted the English homograph in bold, correctly translated words in blue, and wrongly translated words in red.",
"(1) Target homographs are translated into the correct sense with the help of context network.",
"For the first sample translation, meets is correctly translated to by our model, and wrongly translated to by baseline model.",
"In fact, is closer to the definition come together intentionally and is closer to satisfy in the English dictionary.",
"(2) Target homographs are translated into different but similar senses for both models in the forth example.",
"Both models translate the word believed to common translations or , but these meaning are both close to reference translation .",
"(3) Target homograph is translated into the wrong sense for the baseline model, but is not translated in our model in the fifth example.",
"Word sense disambiguation (WSD), the task of determining the correct meaning or sense of a word in context is a long standing task in NLP (Yarowsky, 1995; Ng and Lee, 1996; Mihalcea and Faruque, 2004; Navigli, 2009; Zhong and Ng, 2010; Di Marco and Navigli, 2013; Chen et al., 2014; Camacho-Collados et al., 2015).",
"Recent research on tackling WSD and capturing multi-senses includes work leveraging LSTM (Kageback and Salomonsson, 2016; Yuan et al., 2016), which we extended as a context network in our paper and predicting senses with word embeddings that capture context.",
"Suster et al. (2016); Kawakami and Dyer (2016) also showed that bilingual data improves WSD.",
"In contrast to the standard WSD formulation, Vickrey et al. (2005) reformulated the task of WSD for Statistical Machine Translation (SMT) as predicting possible target translations which directly improves the accuracy of machine translation.",
"Following this reformulation, Chan et al. (2007); Carpuat and Wu (2007a,b) integrated WSD systems into phrase-based systems.",
"Xiong and Zhang (2014) breaks the process into two stages.",
"First predicts the sense of the ambiguous source word.",
"The predicted word senses together with other context features are then used to predict possible target translation.",
"Within the framework of Neural MT, there are works that has similar motivation to ours.",
"Choi et al. (2017) leverage the NBOW as context and gate the word-embedding on both encoder and decoder side.",
"However, their work does not distinguish context vectors for words in the same sequence, in contrast to the method in this paper, and our results demonstrate that this is an important feature of methods that handle homographs in NMT.",
"In addition, our quantitative analysis of the problems that homographs pose to NMT and evaluation of how context-aware models fix them was not covered in this previous work.",
"Rios et al. (2017) tackled the problem by adding sense embedding learned with additional corpus and evaluated the performance on the sentence level with contrastive translation.",
"Theoretically, NMT systems should be able to handle homographs if the encoder captures the clues to translate them correctly.",
"In this paper, we empirically show that this may not be the case; the performance of word level translation degrades as the number of senses for each word increases.",
"We hypothesize that this is due to the fact that each word is mapped to a word vector despite them being in different contexts, and propose to integrate methods from neural WSD systems into an NMT system to alleviate this problem.",
"We concatenated the context vector computed from the context network with the word embedding to form a context-aware word embedding, successfully improving the NMT system.",
"We evaluated our model on three different language pairs and outperformed a strong baseline model according to BLEU score in all of them.",
"We further evaluated our results targeting the translation of homographs, and our model performed better in terms of F1 score.",
"While the architectures proposed in this work do not solve the problem of homographs, our empirical results in Table 3 demonstrate that they do yield improvements (larger than those on other varieties of words).",
"We hope that this paper will spark discussion on the topic, and future work will propose even more focused architectures."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"abstain",
"result",
"objective",
"result",
"result",
"result",
"objective",
"objective"
] |
[
"Generative semantic hashing is a promising technique for large-scale information retrieval thanks to its fast retrieval speed and small memory footprint.",
"For the tractability of training, existing generative-hashing methods mostly assume a factorized form for the posterior distribution, enforcing independence among the bits of hash codes.",
"From the perspectives of both model representation and code space size, independence is always not the best assumption.",
"In this paper, to introduce correlations among the bits of hash codes, we propose to employ the distribution of Boltzmann machine as the variational posterior.",
"To address the intractability issue of training, we first develop an approximate method to reparameterize the distribution of a Boltzmann machine by augmenting it as a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution.",
"Based on that, an asymptotically-exact lower bound is further derived for the evidence lower bound (ELBO).",
"With these novel techniques, the entire model can be optimized efficiently.",
"Extensive experimental results demonstrate that by effectively modeling correlations among different bits within a hash code, our model can achieve significant performance gains.",
"Similarity search, also known as nearest-neighbor search, aims to find items that are similar to a query from a large dataset.",
"It plays an important role in modern information retrieval systems and has been used in various applications, ranging from plagiarism analysis (Stein et al., 2007) to content-based multimedia retrieval (Lew et al., 2006), etc .",
"However, looking for nearest neighbors in the Euclidean space is often computationally Corresponding author.",
"prohibitive for large-scale datasets (calculating co-sine similarity with high-dimensional vectors is computationally-expensive).",
"Semantic hashing circumvents this problem by representing semantically similar documents with compact and binary codes.",
"Accordingly, similar documents can be retrieved by evaluating the hamming distances of their hash codes much more efficiently.",
"To obtain similarity-preserving hash codes, extensive efforts have been made to learn hash functions that can preserve the similarity information of original documents in the binary embedding space (Shen et al., 2015; Liu et al., 2016).",
"Existing methods often require the availability of label information, which is often expensive to obtain in practice.",
"To avoid the use of labels, generative semantic hashing methods have been developed.",
"Specifically, the variational autoencoder (VAE) is first employed for semantic hashing in (Chaidaroon and Fang, 2017), and their model is termed VDSH.",
"As a two-step process, the continuous document representations obtained from VAE are directly converted into binary hash codes.",
"To resolve the two-step training problem, Bernoulli priors are leveraged as the prior distribution in NASH (Shen et al., 2018), replacing the continuous Gaussian prior in VDSH.",
"By utilizing straight-through (ST) technique (Bengio et al., 2013), their model can be trained in an end-to-end manner, while keeping the merits of VDSH.",
"Recently, to further improve the quality of hash codes, mixture priors are investigated in BMSH (Dong et al., 2019), while more accurate gradient estimators are studied in Doc2hash (Zhang and Zhu, 2019), both under a similar framework as NASH.",
"Due to the training-tractability issue, the aforementioned generative hashing methods all assume a factorized variational form for the posterior, e.g. , independent Gaussian in VDSH and independent Bernoulli in NASH, BMSH and Doc2hash.",
"This assumption prevents the models from capturing dependencies among the bits of hash codes.",
"Although uncorrelated bits are sometimes preferred in hashing, as reported in (Zhang and Li, 2014), this may not apply to generative semantic hashing.",
"This is due to the fact that the independent assumption could severely limit a model's ability to yield meaningful representations and thereby produce high-quality hash codes.",
"Moreover, as the code length increases (to e.g. 128 bits), the number of possible codes (or simply the code space) will be too large for a dataset with limited number of data points.",
"As a result, we advocate that correlations among bits of a hash code should be considered properly to restrict the embedding space, and thus enable a model to work effectively under a broad range of code lengths.",
"To introduce correlations among bits of hash codes, we propose to adopt the Boltzmann-machine (BM) distribution (Ackley et al., 1985) as a variational posterior to capture various complex correlations.",
"One issue with this setting, relative to existing efficient training methods, is the inefficiency brought in training.",
"To address this issue, we first prove that the BM distribution can be augmented as a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution.",
"Using this result, we then show that samples from BM distributions can be well reparameterized easily.",
"To enable efficient learning, an asymptotically-exact lower bound of the standard evidence lower bound (ELBO) is further developed to deal with the notorious problem of the normalization term in Boltzmann machines.",
"With the proposed reparameterization and the new lower bound, our model can be trained efficiently as the previous generative hashing models that preserve no bit correlations.",
"Extensive experiments are conducted to evaluate the performance of the proposed model.",
"It is observed that on all three public datasets considered, the proposed model achieves the best performance among all comparable models.",
"In particular, thanks to the introduced correlations, we observe the performance of the proposed model does not deteriorate as the code length increases.",
"This is surprising and somewhat contrary to what has been observed in other generative hashing models.",
"Generative Semantic Hashing In the context of generative semantic hashing, each document is represented by a sequence of words x =",
"{ w 1 , w 2 , , w | x | } , where w i is the i -th word and is denoted by a | V | -dimensional one-hot vector; | x | and | V | denotes the document size (number of words) and the vocabulary size, respectively.",
"Each document x is modeled by a joint probability: p ( x, s ) = p ( x | s ) p ( s ) , (1) where s is a latent variable representing the doc-ument's hash code.",
"With the probability p ( x, s ) trained on a set of documents, the hash code for a document x can be derived directly from the posterior distribution p ( s | x ) .",
"In existing works, the likelihood function, or the decoder takes a form p ( x | s ) = (cid:81) | x | i =1 p ( w i | s ) with p ( w i | s ) (cid:44) exp( s T Ew i + b i ) (cid:80) | V | j =1 exp( s T Ee j + b j ) , (2) where E R m | V | is the matrix connecting the latent code s and the one-hot representation of words; and e j is the one-hot vector with the only 1' locating at the i -th position.",
"Documents could be modelled better by using more expressive likelihood functions, e.g. , deep neural networks, but as explained in (Shen et al., 2018), they are more likely to destroy the crucial distance-keeping property for semantic hashing.",
"Thus, the simple form of (2) is often preferred in generative hashing.",
"As for the prior distribution p ( s ) , it is often chosen as the standard Gaussian distribution as in VDSH (Chaidaroon and Fang, 2017), or the Bernoulli distribution as in NASH and BMSH (Shen et al., 2018; Dong et al., 2019).",
"Inference Probabilistic models can be trained by maximizing the log-likelihood log p ( x ) with p ( x ) = (cid:82) s p ( x, s ) ds .",
"However, due to the intractability of calculating p ( x ) , we instead optimize its evidence lower bound (ELBO), i.e. , L = E q ( s | x ) (cid:20) log p ( x | s ) p ( s ) q ( s | x ) (cid:21) , (3) where q ( s | x ) is the proposed variational posterior parameterized by .",
"It can be shown that log p ( x ) L holds for any q ( s | x ) , and that if q ( s | x ) is closer to the true posterior p ( s | x ) , the bound L will be tighter.",
"Training then reduces to maximizing the lower bound L w.r.t. and .",
"In VDSH (Chaidaroon and Fang, 2017), q ( s | x ) takes the form of an independent Gaussian distribution q ( s | x ) = N (cid:0) s | ( x ) , diag( 2 ( x )) (cid:1) , (4) where ( x ) and ( x ) are two vector-valued functions parameterized by multi-layer perceptrons (MLP) with parameters .",
"Later, in NASH and BMSH (Shen et al., 2018; Dong et al., 2019), q ( s | x ) is defined as an independent Bernoulli distribution, i.e. , q ( s | x ) = Bernoulli( g ( x )) , (5) where g ( x ) is also vector-valued function parameterized by a MLP.",
"The value at each dimension represents the probability of being 1 at that position.",
"The MLP used to parameterize the posterior q ( s | x ) is also referred to as the encoder network.",
"One key requirement for efficient end-to-end training of generative hashing method is the availability of reparameterization for the variational distribution q ( s | x ) .",
"For example, when q ( s | x ) is a Gaussian distribution as in (4), a sample s from it can be efficiently reparameterized as s = ( x ) + ( x ) (cid:15) (6) with (cid:15) N (0 , I ) .",
"When q ( s | x ) is a Bernoulli distribution as in (5), a sample from it can be reparameterized as s = sign ( g ( x ) (cid:15) ) + 1 2 (7) where (cid:15) R m with elements (cid:15) i uniform(0 , 1) .",
"With these reparameterization tricks, the lower bound in (3) can be estimated by the sample s as L log p ( x | s ) p ( s ) q ( s | x ) , (8) where s has been denoted as s to explicitly indicate its dependence on .",
"To train these hashing models, the backpropagation algorithm can be employed to estimate the gradient of (8) w.r.t. and easily.",
"However, it is worth noting that in order to use the reparameterization trick, all existing methods assumed a factorized form for the proposed posterior q ( s | x ) , as shown in (4) and (5).",
"This suggests that the binary bits in hash codes are independent of each other, which is not the best setting in generative semantic hashing.",
"In this section, we present a scalable and efficient approach to introducing correlations into the bits of hash codes, by using a Boltzmann-machine distribution as the variational posterior with approximate reparameterization.",
"Many probability distributions defined over binary variables s { 0 , 1 } m are able to capture the dependencies.",
"Among them, the most famous one should be the Boltzmann-machine distribution (Ackley et al., 1985), which takes the following form: b ( s ) = 1 Z e 12 s T s + T s , (9) where R m m and R m are the distribution parameters; and Z (cid:44) (cid:80) s e 12 s T s + T s is the normalization constant.",
"The Boltzmann-machine distribution can be adopted to model correlations among the bits of a hash code.",
"Specifically, by restricting the posterior to the Boltzmann form q ( s | x ) = 1 Z e E ( s ) (10) and substituting it into the lower bound of (3), we can write the lower bound as: L = E q ( s | x ) (cid:20) log p ( x | s ) p ( s ) e E ( s ) (cid:21) + log Z , (11) where E ( s ) (cid:44) 12 s T ( x ) s T ( x ) s ; and ( x ) and ( x ) are functions parameterized by the encoder network with parameters and x as input.",
"One problem with such modeling is that the expectation term E q ( s | x ) [ ] in (11) cannot be expressed in a closed form due to the complexity of q ( s | x ) .",
"Consequently, one cannot directly optimize the lower bound L w.r.t. and .",
"An alternative way is to approximate the expectation term by using the reparameterized form of a sample s from q ( s | x ) , as was done in the previous uncorrelated generative hashing models (see (6) and (7)).",
"Compared to existing simple variational distributions, there is no existing work on how to reparameterize the complicated Boltzmann-machine distribution.",
"To this end, we first show that the Boltzmann-machine distribution can be equivalently written as the composition of an approximate correlated Gaussian distribution and a Bernoulli distribution.",
"Proposition 1. A Boltzmann-machine distribution b ( s ) = 1 Z e 12 s T s + T s with (cid:31) 0 can be equivalently expressed as the composition of two distributions, that is, b ( s ) = (cid:90) p ( s | r ) p ( r ) dr, (12) where p ( r ) = 1 Z (cid:81) mi =1 ( e r i + 1) N ( r ; , ) ; p ( s | r ) = (cid:81) mi =1 p ( s i | r i ) with s i and r i denoting the i -th element of s and r ; and p ( s i | r i ) (cid:44) Bernoulli( ( r i )) with ( ) being the sigmoid function.",
"Proof.",
"See Appendix A.1 for details.",
"Based on Proposition 1, we can see that a sample from the Boltzmann-machine distribution q ( s | x ) in (10) can be sampled hierarchically as r q ( r | x ) and s Bernoulli( ( r )) , (13) where q ( r | x )= 1 Z m (cid:89) i =1 ( e r i + 1) N ( r ; ( x ) , ( x )) (14) and ( ) is applied to its argument element-wise.",
"From the expression of q ( r | x ) , we can see that for small values of r i , the influence of ( e r i + 1) on the overall distribution is negligible, and thus q ( r | x ) can be well approximated by the Gaussian distribution N ( r ; ( x ) , ( x )) .",
"For relatively large r i , the term ( e r i + 1) will only influence the distribution mean, roughly shifting the Gaussian distribution N ( r ; ( x ) , ( x )) by an amount approximately equal to its variance.",
"For problems of interest in this paper, the variances of posterior distribution are often small, hence it is reasonable to approximate samples from q ( r | x ) by those from N ( r ; ( x ) , ( x )) .",
"With this approximation, we can now draw samples from Boltzmann-machine distribution q ( s | x ) in (10) approximately by the two steps below r N ( r ; ( x ) , ( x )) , (15) s Bernoulli( ( r )) .",
"where L ( x ) is the Cholesky decomposition matrix of ( x ) with ( x ) = L ( x ) L T ( x ) ; and (cid:15) R m with (cid:15) N (0 , I ) .",
"It should be noted that in practice, we can define the function L ( x ) in advance and then obtain ( x ) as ( x ) = L ( x ) L T ( x ) , thus the Cholesky decomposition is not needed.",
"Given the Gaussian sample r , similar to the reparameterization of Bernoulli variables in (7), we can reparameterize the Bernoulli sample s Bernoulli( ( r )) as s = sign( ( r ) u )+1 2 , where u R m with each element u i uniform(0 , 1) .",
"By combining the above reparameterizations, a sample from the Boltzmann-machine distribution q ( s | x ) can then be approximately reparameterized as s = sign ( ( ( x )+ L ( x ) (cid:15) ) u )+1 2 , (18) where the subscript is to explicitly indicate that the sample s is expressed in terms of .",
"With the reparameterization s , the expectation term in (11) can be approximated as log p ( x | s ) p ( s ) e E ( s ) .",
"Consequently, the gradients of this term w.r.t. both and can be evaluated efficiently by backpropagation, with the only difficulty lying at the non-differentiable function sign( ) of s in (18).",
"Many works have been devoted to estimate the gradient involving discrete random variables (Bengio et al., 2013; Jang et al., 2017; Maddison et al., 2017; Tucker et al., 2017; Grathwohl et al., 2018; Yin and Zhou, 2019).",
"Here, we adopt the simple straight-through (ST) technique (Bengio et al., 2013), which has been found performing well in many applications.",
"By simply treating the hard threshold function sign( ) as the identity function, the ST technique estimates the gradient as s 1 2 [ ( ( x ) + L ( x ) (cid:15) ) u ] .",
"Then, the gradient of the first term in ELBO L w.r.t. can be computed efficiently by backpropagation.",
"To optimize the ELBO in (11), we still need to calculate the gradient of log Z , which is known to be notoriously difficult.",
"A common way is to estimate the gradient log Z by MCMC methods (Tieleman, 2008; Desjardins et al., 2010; Su et al., 2017a,b), which are computationally expensive and often of high variance.",
"By noticing a special form of the ELBO (11), we develop a lower bound for the ELBO L , where the log Z term can be conveniently cancelled out .",
"Specifically, we introduce an-other probability distribution h ( s ) and lower bound the original ELBO: (cid:101) L = L KL( h ( s ) || q ( s | x )) .",
"Since KL( ) 0 , we have (cid:101) L ( , ) L holds for all h ( s ) , i.e. , (cid:101) L is a lower bound of L , and equals to the ELBOL when h ( s ) = q ( s | x ) .",
"For the choice of h ( s ) , it should be able to reduce the gap between (cid:101) L and L as much as possible, while ensuring that the optimization is tractable.",
"Balancing on the two sides, a mixture distribution is used h k ( s ) = 1 k k (cid:88) i =1 p ( s | r ( i ) ) , (21) where k denotes the number of components; p ( s | r ( i ) ) is the multivariate Bernoulli distribution and r ( i ) is the i -th sample drawn from q ( r | x ) as defined in (14).",
"By substituting h k ( s ) into (20) and taking the expectation w.r.t. r ( i ) , we have (cid:101) L k (cid:44) L E q ( r (1 k ) | x ) [KL( h k ( s ) || q ( s | x ))] (22) where q ( r (1 ,k ) | x ) = (cid:81) ki =1 q ( r ( i ) | x ) .",
"It can be proved that the bound (cid:101) L k gradually approaches the ELBO L as k increases, and finally equals to it as k .",
"Specifically, we have Proposition 2. For any integer k , the lower bound (cid:101) L k of the ELBO satisfies the conditions: 1) (cid:101) L k +1 (cid:101) L k ; 2) lim k (cid:101) L k = L .",
"(cid:101) L k = E q ( s | x ) (cid:20) log p ( x | s ) p ( s ) e E ( s ) (cid:21) E q ( r (1 k ) | x ) (cid:20) E h k ( s ) (cid:20) log h k ( s ) e E ( s ) (cid:21)(cid:21) , (23)",
"where the log Z term is cancelled out since it appears in both terms but has opposite signs.",
"For the first term in (23), as discussed at the end of Section 3.1, it can be approximated as log p ( x | s ) p ( s ) e E ( s ) .",
"For the second term, each sample r ( i ) for i = 1 , , k can be approximately reparameterized like that in (17).",
"Given the r ( i ) for i = 1 , , k , samples from h k ( s ) can also be reparameterized in a similar way as that for Bernoulli distributions in (7).",
"Thus, samples drawn from r (1 k ) q ( r (1 k ) | x ) and s h k ( s ) are also reparameterizable, as detailed in Appendix A.3.",
"By denoting this reparametrized sample as s , we can approximate the second term in (23) as log h k ( s ) e E ( s ) .",
"Thus the lower bound (23) becomes (cid:101) L k log p ( x | s ) p ( s ) e E ( s ) log h k ( s ) e E ( s ) .",
"With the discrete gradient estimation techniques like the ST method, the gradient of (cid:101) L k w.r.t. and can then be evaluated efficiently by backpropagation.",
"Proposition 2 indicates that the exact (cid:101) L k gets closer to the ELBO as k increases, so better bound can be expected for the approximated (cid:101) L k as well when k increases.",
"In practice, a moderate value of k is found to be sufficient to deliver a good performance.",
"In the reparameterization of a Gaussian sample, r = ( x ) + L ( x ) (cid:15) in (17), a m m matrix L ( x ) is required, with m denoting the length of hash codes.",
"The elements of L ( x ) are often designed as the outputs of neural networks parameterized by .",
"Therefore, if m is large, the number of neural network outputs will be too large.",
"To overcome this issue, a more parameter-efficient strategy called Low-Rank Perturbation is employed, which restricts covariance matrix to the form = D + UU (cid:62) , (25) where D is a diagonal matrix with positive entries and U = [ u 1 , u 2 , u v ] is a low-rank perturbation matrix with u i R m and v (cid:28) m .",
"Under this low-rank perturbed , the Gaussian samples can be reparameterized as r = ( x ) + D 1 / 2 ( x ) (cid:15) 1 + U ( x ) (cid:15) 2 , (26) where (cid:15) 1 N (0 , I m ) and (cid:15) 2 N (0 , I v ) .",
"We can simply replace (17) with the above expression in any place that uses r .",
"In this way, the number of neural network outputs can be dramatically reduced from m 2 to mv .",
"Semantic Hashing (Salakhutdinov and Hinton, 2009) is a promising technique for fast approximate similarity search.",
"Locality-Sensitive Hashing, one of the most popular hashing methods (Datar et al., 2004), projects documents into low-dimensional hash codes in a randomized manner.",
"However, the method does not leverage any information of data, and thus generally performs much worse than those data-dependent methods.",
"Among the data-dependent methods, one of the mainstream methods is supervised hashing, which learns a function that could output similar hash codes for semantically similar documents by making effective use of the label information (Shen et al., 2015; Liu et al., 2016).",
"Different from supervised methods, unsupervised hashing pays more attention to the intrinsic structure of data, without making use of the labels.",
"Spectral hashing (Weiss et al., 2009), for instance, learns balanced and uncorrelated hash codes by seeking to preserve a global similarity structure of documents.",
"Self-taught hashing (Zhang et al., 2010), on the other hand, focuses more on preserving local similarities among documents and presents a two-stage training procedure to obtain such hash codes.",
"In contrast, to generate high-quality hash codes, iterative quantization (Gong et al., 2013) aims to minimize the quantization error, while maximizing the variance of each bit at the same time.",
"Among the unsupervised hashing methods, the idea of generative semantic hashing has gained much interest in recent years.",
"Under the VAE framework, VDSH (Chaidaroon and Fang, 2017) was proposed to first learn continuous the docu-ments' latent representations, which are then cast into binary codes.",
"While semantic hashing is achieved with generative models nicely, the two-stage training procedure is problematic and is prone to result in local optima.",
"To address this issue, NASH (Shen et al., 2018) went one step further and presented an integrated framework to enable the end-to-end training by using the discrete Bernoulli prior and the ST technique, which is able to estimate the gradient of functions with discrete variables.",
"Since then, various directions have been explored to improve the performance of NASH.",
"(Dong et al., 2019) proposed to employ the mixture priors to improve the model's capability to distinguish documents from different categories, and thereby improving the quality of hash codes.",
"On the other hand, a more accurate gradient estimator called Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) is explored in Doc2hash (Zhang and Zhu, 2019) to replace the ST estimator in NASH.",
"More recently, to better model the similarities between different documents, (Hansen et al., 2019) investigated the combination of generative models and ranking schemes to generate hash codes.",
"Different from the aforementioned generative semantic hashing methods, in this paper, we focus on how to incorporate correlations into the bits of hash codes.",
"Datasets Following previous works, we evaluate our model on three public benchmark datasets:",
"i) Reuters21578, which consists of 10788 documents with 90 categories;",
"ii) 20Newsgroups, which contains 18828 newsgroup posts from 20 different topics;",
"iii) TMC, which is a collection of 21519 documents categorized into 22 classes.",
"Training Details For the conveniences of comparisons, we use the same network architecture as that in NASH and BMSH.",
"Specifically, a 2-layer feed-forward neural network with 500 hidden units and a ReLU activation function is used as an inference network, which receives the TF-IDF of a document as input and outputs the mean and covariance matrix of the Gaussian random variables r .",
"During training, the dropout (Srivastava et al., 2014) is used to alleviate the overfitting issue, with the keeping probability selected from { 0.8, 0.9 } based on the performance on the validation set.",
"The Adam optimizer (Kingma and Ba, 2014) is used to train our model, with the learning rate set to 0.001 initially and then decayed for every 10000 iterations.",
"For all experiments on different datasets and lengths of hash codes, the rank v of matrix U is set to 10 and the number of component k in the distribution h k ( s ) is set to 10 consistently, although a systematic ablation study is conducted in Section 5.5 to investigate their impacts on the final performances.",
"Baselines The following unsupervised semantic hashing baselines are adopted for comparisons: Locality Sensitive Hashing (LSH) (Datar et al., 2004), Stack Restricted Boltzmann Machines (S-RBM) (Salakhutdinov and Hinton, 2009), Spectral Hashing (SpH) (Weiss et al., 2009), Self-Taught Hashing (STH) (Zhang et al., 2010), Variational Deep Semantic Hashing (VDSH) (Chaidaroon and Fang, 2017), Neural Architecture for Generative Semantic Hashing (NASH) (Shen et al., 2018), and Semantic Hashing model with a Bernoulli Mixture prior (BMSH)(Dong et al., 2019).",
"Evaluation Metrics The performance of our proposed approach is measured by retrieval precision i.e. , the ratio of the number of relevant documents to that of retrieved documents.",
"A retrieved document is said to be relevant if its label is the same as that of the query one.",
"Specifically, during the evaluating phase, we first pick out top 100 most similar documents for each query document according to the hamming distances of their hash codes, from which the precision is calculated.",
"The precisions averaged over all query documents are reported as the final performance.",
"The retrieval precisions on datasets TMC, Reuters and 20Newsgroups are reported in Tables 1, 2 and 3, respectively, under different lengths of hash codes.",
"Compared to the generative hashing method NASH without considering correlations, we can see that the proposed method, which introduces correlations among bits by simply employing the distribution of Boltzmann machine as the posterior, performs significantly better on all the three datasets considered.",
"This strongly corroborates the benefits of taking correlations into account when learning the hash codes.",
"From the tables, we can also observe that the proposed model even outperforms the BMSH, an enhanced variant of NASH that employs more complicated mixture distributions as a prior.",
"Since only the simplest prior is used in the proposed model, larger performance gains can be expected if mixture priors are used as in BMSH.",
"Notably, a recent work named RBSH is proposed in (Hansen et al., 2019), which improves NASH by specifically ranking the documents according to their similarities.",
"However, since it employs a different data preprocessing technique as the existing works, we cannot include its results for a direct comparison here.",
"Nevertheless, we trained our model on their preprocessed datasets and find that our method still outperforms it.",
"For details about the results, please refer to Appendix A.4.",
"Moreover, when examining the retrieval performance of hash codes under different lengths, it is observed that the performance of our proposed method never deteriorates as the code length increases, while other models start to perform poorly after the length of codes reaching a certain level.",
"For the most comparable methods like VDSH, NASH and BMSH, it can be seen that the performance of 128 bits is generally much worse than that of 64 bits.",
"This phenomenon is illustrated more clearly in Figure 1. This may attribute to the reason that for hash codes without correlations, the number of codes will increase exponentially as the code length increases.",
"Because the code space is too large, the probability of assigning similar items Method 8 bits 16 bits 32 bits 64 bits 128 bits LSH 0.4388 0.4393 0.4514 0.4553 0.4773 S-RBM 0.4846 0.5108 0.5166 0.5190 0.5137 SpH 0.5807 0.6055 0.6281 0.6143 0.5891 STH 0.3723 0.3947 0.4105 0.4181 0.4123 VDSH 0.4330 0.6853 0.7108 0.4410 0.5847 NASH 0.5849 0.6573 0.6921 0.6548 0.5998 BMSH n.a. 0.7062 0.7481 0.7519 0.7450 Ours 0.6959 0.7243 0.7534 0.7606 0.7632 Table 1: Precision of the top 100 retrieved documents on TMC dataset.",
"to nearby binary codes may decrease significantly.",
"But for the proposed model, since the bits of hash codes are correlated to each other, the effective number of codes can be determined by the strength of correlations among bits, effectively restricting the size of code space.",
"Therefore, even though the code length increases continually, the performance of our proposed model does not deteriorate.",
"To show the computational efficiency of our proposed method, we also report the average running time per epoch in GPU on TMC dataset, which is of the largest among the considered ones, in Table 4.",
"As a benchmark, the average training time of vanilla NASH is 2 .",
"553 s per epoch.",
"It can be seen that because of to the use of low-rank parameterization of the covariance matrix, the proposed model can be trained almost as efficiently as vanilla NASH, but deliver a much better performance.",
"To further investigate the capability of different models in generating semantic-preserving binary codes, we project the hash codes produced by VDSH, NASH and our proposed model on 20Newsgroups datasets onto a two-dimensional plane by using the widely adopted UMAP technique (McInnes",
"et al., 2018) and then visualize them on the two-dimensional planes, as shown in Figure 2. It can be seen that the hash codes produced by VDSH are quite mixed for documents from different categories, while those produced by NASH are more distinguishable, consistent with the hypothesis that NASH is able to produce better codes than VDSH thanks to the end-to-end training.",
"From the figure, we can further observe that the hash codes produced by our proposed method are the most distinguishable among all three methods considered, corroborating the benefits of introducing correlations among the bits of hash codes.",
"Ranks v Low-rank perturbed covariance matrix enables the proposed model to trade-off between complexity and performance.",
"That is, larger v allows the model to capture more dependencies among latent variables, but the required computational complexity also increases.",
"To investigate its impacts, we evaluate the performance of the 64-bit hash codes obtained from the proposed model under different values of v , with the other key parameter k fixed to 10.",
"The result is listed in the left half of Table 5.",
"Notably, the proposed model with v = 0 is equivalent to NASH since there is not any correlation between the binary random variables.",
"It can be seen that as the number of ranks Value of v Precision Value of k Precision 0 0.7812 1 0.8300 1 0.8353 3 0.8391 5 0.8406 5 0.8395 10 0.8465 10 0.8465 Table 5: Left: Retrieval precisions under different values of v with k fixed to be 10 on Reuters dataset; Right: Retrieval precision under different values of k with v fixed to be 10 on Reuters dataset.",
"increases, the retrieval precisions also increase, justifying the hypothesis that employing the posteriors with correlations can increase the model's representational capacity and thereby improves the hash codes' quality in turn.",
"It is worth noting that the most significant performance improvement is observed between the models with v = 0 and v = 1 , and then as the value of v continues to increase, the improvement becomes relatively small.",
"This indicates that it is feasible to set the v to a relatively small value to save computational resources while retaining competitive performance.",
"The number of mixture components k As stated in Section 3.3, increasing the number of components k in the mixture distribution h k ( s ) will reduce the gap between the lower bound (cid:101) L k and the ELBO L .",
"To investigate the impacts of k , the retrieval precisions of the proposed model are evaluated under different values of k , while setting the other key parameter v = 10 .",
"It can be seen from the right half of Table 5 that as the number of components k increases, the retrieval precision also increases gradually, suggesting that a tighter lower bound (cid:101) L k can always indicate better hash codes.",
"Hence, if more mixture components are used, better hash codes can be expected.",
"Due to the sake of complexity, only 10 components are used at most in the experiments.",
"In this paper, by employing the distribution of Boltzmann machine as the posterior, we show that correlations can be efficiently introduced into the bits.",
"To facilitate training, we first show that the BM distribution can be augmented as a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution.",
"Then, an asymptotically-exact lower bound of ELBO is further developed to tackle the tricky normalization term in Boltzmann machines.",
"Significant performance gains are observed in the experiments after introducing correlations into the bits of hash codes.",
"This work is supported by the National Natural Science Foundation of China (NSFC) (No. 61806223, U1711262, U1501252, U1611264, U1711261), National Key R&D Program of China (No. 2018YFB1004404), and Fundamental Research Funds for the Central Universities (No. 191gjc04).",
"Also, CC appreciates the support from Yahoo! Research."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"The stance detection task aims at detecting the stance of a tweet or a text for a target.",
"These targets can be named entities or free-form sentences (claims).",
"Though the task involves reasoning of the tweet with respect to a target, we find that it is possible to achieve high accuracy on several publicly available Twitter stance detection datasets without looking at the target sentence.",
"Specifically, a simple tweet classification model achieved human-level performance on the WTWT dataset and more than two-third accuracy on various other datasets.",
"We investigate the existence of biases in such datasets to find the potential spurious correlations of sentiment-stance relations and lexical choice associated with the stance category.",
"Furthermore, we propose a new large dataset free of such biases and demonstrate its aptness on the existing stance detection systems.",
"Our empirical findings show much scope for research on the stance detection task and proposes several considerations for creating future stance detection datasets.",
"1 1 Introduction Stance detection is a vital sub-task for fake news detection (Pomerleau and Rao, 2017), automated fact checking (Vlachos and Riedel, 2014; Ferreira and Vlachos, 2016), social media analysis (Zhang et al., 2017), analyzing online debates (Bar-Haim et al., 2017) and rumour verification (Derczynski et al., 2017; Gorrell et al., 2019).",
"Furthermore, it is also an essential measure for progress in Natural Language Understanding, especially in the noisy-text domain.",
"Over the recent years, several stance detection datasets have been proposed.",
"These datasets, in turn, facilitated progress in stance detection research, with some systems achieving up to 93.7% accuracy (Dulhanty et al., 2019).",
"However, most 1 Code: https://github.com/Ayushk4/bias-stance Dataset: https://github.com/Ayushk4/stance-dataset Analysts: Aetna-Humana Deal still Probable, Anthem-Cigna Unlikely @InsuranceNewsNet <URL> 11:03 AM Twitter for iPhone [Aetna-Humana] Support [Anthem-Cigna] Refute Figure 1: An example tweet from WTWT dataset with different targets.",
"of these state-of-the-art systems are complex deep neural networks, making them difficult to interpret.",
"Lack of explainability raises concern since previous works (Gururangan et al., 2018; Goyal et al., 2017; Cirik et al., 2018; Geva et al., 2019) on other tasks demonstrated that superficial dataset biases could result in inflated test-set performance.",
"With this motivation, we carry out the first study analyzing several publicly available Twitter stance detection datasets.",
"Our experiments reveal rampant biases in datasets through which even target-oblivious models can achieve impressive performance.",
"Various existing works have hinted at the presence of such dataset biases.",
"For example, TAN model (Du et al., 2017) is a very competitive stance detection model.However, Ghosh et al. (2019) recently proved that TAN does not take advantage of target information at all.",
"In RumourEval-2017 (Derczynski et al., 2017), models delivered up to 0.74 accuracy without any knowledge of the target, being only short of 0.004 from the best model considering the context.",
"Similarly, in RumourEval2019 (Gorrell et al., 2019), the runner-up model (Fajcik et al., 2019) observed a 0.43 decrease in accuracy by considering the target information.",
"Schiller et al. (2020) discovered that stance detection models are prone to adversarial attacks of paraphrasing, spelling error and negation similar to other NLP tasks (Ribeiro et al., 2020).",
"However, Target Tweets Dataset size Number Domain Type # Stance Unique Scrapped DT/T WTWT 51284 5 Finance (M&A) fixed 4 50210 45865 2% SE16 4162 5 Various fixed 3 4162 0% M-T 4455 3 Political fixed (pairs) 3 4413 2688 0.9% RE17 5568 -Rumour-claims free-form 4 556k 0% RE19 8574 -Rumour-claims free-form 4 8574 0% Encryption 2999 1 Encryption-debate fixed 3 2522 1634 0% Table 1: Statistics of the Twitter stance detection datasets considered.",
"Target plays a crucial role in deciding stance.",
"Consider the example in Figure 1.",
"Here, the tweet stance varies for the two targets.",
"The existing datasets have very few examples with different target labels.",
"Models can pick up on pseudo signals in the tweet content and shortcut the task without looking at the targets.",
"These signals or biases occur due to inherent biases in our language and human nature.",
"For example, certain lexical choices can correlate with their respective stance classes.",
"Upon discovering and studying such correlations, we augment the WTWT dataset addressing these issues and re-evaluate the stance detection systems.",
"We make the following contributions.",
"We empirically demonstrate biases across a variety of Twitter stance detection datasets and carry out a detailed analysis of these datasets.",
"Consequently, we propose a new large scale dataset free of such spurious cues and re-evaluate the stance detection systems to show the usefulness of this dataset.",
"We first discuss the datasets considered (Section 2.1), followed by our experiments (Section 2.2) and analysis (Section 2.3).",
"We consider a wide variety of publicly available Twitter stance detection datasets including cross-target, multi-target, rumour-claim variants of stance detection.",
"These datasets have a diverse set of targets ranging from free-form sentences to fixed target entities.",
"Over the past few years, several more variants of this task have been proposed, such as in non-English language (Darwish et al., 2017; Kk and Can, 2018; Lai et al., 2018) and multi-lingual settings (Zotova et al., 2020; Vamvas and Sen-nrich, 2020), different learning paradigms of unsupervised (Darwish et al., 2019) semi-supervised (Mohammad et al., 2016b), zero-shot (Allaway and McKeown, 2020) and non-Twitter tasks of debate-argument stance (Bar-Haim et al., 2017) and headline-body stance detection (Pomerleau and Rao, 2017).",
"Here, however, we only study the English Twitter stance detection tasks in fully supervised learning settings.",
"Specifically, we consider 6 datasets WTWT (Conforti et al., 2020), SE16 (task-A) (Mohammad et al., 2016b,a) M-T (Sobhani et al., 2017), RE17 (Derczynski et al., 2017), RE19 (Gor-rell et al., 2019) and Encryption (Addawood et al., 2017) with their statistics mentioned in Table 7.",
"This table also reports the percentage of tweets in the entire dataset labelled for different targets (DT) given by DT/T in the last column.",
"We can see that these datasets have very few tweets annotated for different targets.",
"The M-T dataset's targets are a pair of politicians, and for each of its tweet-targets, the label is a pair of stances.",
"We formulate detecting these two stance-pair as separate tasks for the experiments in the following section.",
"Method: Given a tuple ( tweet, target, stance ) , a target-oblivious classifier f ( tweet ) stance is trained in a supervised setting.",
"It is expected that such a classifier would generalize poorly for an unbiased dataset.",
"We set this target-oblivious classifier as the standard Bert classifier (Devlin et al., 2019) pre-trained on Tweets (Nguyen et al., 2020).",
"It receives the input [ CLS ] tweet [ SEP ] \". Additionally, we train a strong target-aware Bert classifier model for stance detection (Ghosh et al., 2019). This model takes input [ CLS ] tweet [ SEP ] target [ SEP ] \".",
"We use PyTorch (Paszke et al., 2019), HuggingFace (Wolf et al., 2019), Wandb (Biewald, 2020) and Scikit-Learn (Pedregosa et al., 2011) for our experiments.",
"We use Adam optimizer (Kingma and Ba, 2014).",
"We elaborate the full experimental settings in the appendix A.",
"F 1 Macro across healthcare merger operations Entertainment Models CVS_AET CI_ESRX ANTM_CI AET_HUM avgF 1 avg w F 1 F 1 Macro Bert (no-target) 0.673 0.703 0.745 0.759 0.720 0.720 0.347 Human Upperbound 0.753 0.712 0.744 0.737 0.736 0.743 N/A Bert (with target) 0.668 0.709 0.746 0.756 0.720 0.719 0.433 Random guessing 0.222 0.237 0.231 0.236 0.230 0.232 0.201 Majority guessing 0.162 0.139 0.155 0.134 0.151 0.148 0.161 Table 2: Results on WTWT dataset (Conforti et al., 2020).",
"Results and Discussion: The WTWT dataset is a cross-target dataset containing four in-domain (healthcare) and one out-of-domain (entertainment) target.",
"For the in-domain evaluation, training is done on three health mergers while testing is done on the fourth unseen target.",
"For out of domain, training is on all four health mergers and testing on the entertainment domain.",
"Table 2 shows the performance of target-oblivious Bert, target-aware Bert and the human upper-bound.",
"The human expert upper-bound values were taken from the WTWT dataset.",
"We observe that the target-oblivious model consistently performs very close to the target-aware model for all the targets.",
"Both these models achieve near-human performance overall on in-domain targets.",
"The target-oblivious Bert surpasses human upper-bound for two mergers individually.",
"Such a feat is alarming, especially because cross-target stance is a more challenging variant (Kk and Can, 2020; Wang et al., 2020) of the task.",
"Results on the other datasets are shown in Table 3.",
"We compare these results with random guessing, predicting majority class and the target-aware Bert.",
"Additionally, RE17, R19, and Encryption datasets are heavily skewed datasets, so Macro-F1 is the proposed metric (Gorrell et al., 2019).",
"The target-oblivious Bert delivers more than two-third classification accuracy consistently across all these datasets.",
"This model achieves impressive performance for all metrics in SE16 and M-T datasets, while also performing significantly above majority class for datasets with skewed distributions on the Macro-F1 metric.",
"The performance delivered by target oblivious Bert is also very close to the target-aware Bert model on every metric.",
"These surprising numbers across all the datasets indicates the presence of spurious cues that encourages the models to bypass the need for looking at the target.",
"After the finding from our previous section, we sought to discover the form in which spurious cues exists and use those findings to create a new dataset.",
"We mainly consider the largest and most recent dataset, WTWT for analysis.",
"We first discuss target-independent lexical choices associated with stance, followed by target-independent sentiment-stance correlations.",
"between tweet and stance following the exact same procedure as (Gururangan et al., 2018) after removing stopwords.",
"Table 4 shows that top 5 stance-wise words along with the fraction of tweets containing those words.",
"We observer that certain groups of target-independent lexicons are highly correlated with stances in some cases occurring in more 29% of the tweet.",
"For Support and Refute classes respectively, we find the co-occurrence of indicative words for the status of merger, such as approves' or blocks'.",
"The Comments relating to these health companies' mergers often talk about its impact, leading to the choice of lexicons containing words like healthcare' and mean' with this stance.",
"Similarly, Unrelated tweets often talk about things related to the companies but unrelated to the merger operation such as stocks' or bids'.",
"Sentiment-stance correlation: Stance detection differs from the sentiment analysis task (Moham-mad et al., 2016b).",
"However, we observe a strong correlation of sentiment with stance.",
"Formally, we obtain a sentiment score between 0 (negative) and 1 (positive) for each tweet using XLNet model (Yang et al., 2019) trained on SST (Socher et al., 2013; Pang and Lee, 2005) and Imdb (Maas et al., 2011).",
"The average sentiment scores of these tweets across Support, Refute, Comment and Unrelated stances were found to be 0.237, 0.657, 0.492 and 0.485 respectively, while their variance were 0.087, 0.056, 0.110, 0.108.",
"The tweets with Support and Refute stance have strong negative or positive sentiment on average while for the other two is it neutral on average but having a high variance.",
"These serve as strong evidences for stance-sentiment correlations.",
"such cues in the remaining datasets, varying with their domains.",
"For example, RE19 has a question mark in more than 75% query' stance tweets, while it is present only in 11% of the entire remaining dataset.",
"Similarly 75% of tweets with deny' stance have highly negative sentiment of less than 0.1 score.",
"In SE16 dataset, had 91.4% of tweets without any opinion 2 had None' stance despite the stance detection task being different from opinion mining task (Mohammad et al., 2016b).",
"With the understanding from the previous section, we propose a new stance detection dataset on which target-unaware models will not perform well.",
"We use the following reasoning for creating the new dataset.",
"If a tweet in the dataset has different stances depending on different targets, then simple tweet classification models will not be able to perform well.",
"Thus we attempt to increase DT/T ratio from Table 7.",
"Formally, we take the WTWT dataset, which is the largest dataset of its kind, with high-quality experts labels of 0.88 Cohen (Co-hen, 1960), and generate new (tweet, target, stance) triplets in three ways.",
"First , we attempt to remove the sentiment-stance correlation by making the stance-wise average sentiment neutral.",
"The WTWT dataset has 5 targets, one target for each merger.",
"We introduce 5 new additional targets which are negations of the original ones.",
"Formally, if the tweet has a Support (Refute) stance to the target CV S _ AET , then its stance to the negated target NEG _ CV S _ AET will be inverted to Refute (Support).",
"This is done only 2 Tweets have gold labels for opinion-class in the dataset.",
"for the two stance classes with non-neutral average sentiment score.",
"Introducing such negated targets reduces their sentiment to near neutral.",
"Second , we remove lexicon-stance correlations by creating multiple targets with different stances for each tweet.",
"Formally, for each tweet t with only one labelled target tgt , if the tweet-target pair ( t, tgt ) has the stance (cid:54) = Unrelated', then pick a target tgt (cid:48) where tgt (cid:48) (cid:54) = tgt and add the tuple ( t, tgt (cid:48) , Unrelated ) to the dataset.",
"Due to WTWT data collection and annotation procedure, this will not generate any wrong labels.",
"This augmentation reduces the lexicon-stance correlations, by having similar sets of lexicons introduced for different stances.",
"Hence, it guarantees target-oblivious shortcuts to result in poor performance.",
"Last , we balance the target-wise class-distributions.",
"For the tuples with Comment' and Unrelated' stances, we create a new tuple with inverted target (same as the first step) for 50% and 75% such examples randomly.",
"The resulting dataset contains 111596 tweet-target pairs each belonging to a stance class.",
"Each merger has at least 10000 data points.",
"The class distribution is also somewhat balanced with more than 10k examples for the least occurring class.",
"Among the tweet-target pairs, the pairs classified as Support, Refute, Comment and Unrelated are distributed in the ratio 1:1:3:5 approximately, having a similar distribution to the WTWT dataset.",
"We propose a similar cross-target evaluation setting for tWTWT as WTWT.",
"For the in domain (health) mergers, we train on three health merger (total six targets including negated target for each merger) and test on the fourth health merger.",
"For the out-of-domain evaluation, we train on the eight targets corresponding to the 4 health mergers and test on the two targets for entertainment merger.",
"target), target-oblivious Bert from 2.2, along with the two strongest baselines from the WTWT paper SiamNet (Santosh et al., 2019) and TAN (Du et al., 2017).",
"For SiamNet and TAN models, we replace the Glove (Pennington et al., 2014) and LSTM (Hochreiter and Schmidhuber, 1997) features with better features from Bert.",
"Table 5 shows the performance of these models.",
"Bert (no-target) gives very low performance, showing that target oblivious models perform poorly on this dataset.",
"Similarly, TAN which has been proven to not at take advantage of the target information (Ghosh et al., 2019) also performs very poorly on the dataset.",
"The target aware Bert offers a competitive performance still being only at 0.51 F1 score.",
"SiamNet follows next at 0.31 F1 score.",
"Both these models have their performance reduced significantly from WTWT dataset.",
"In this paper we demonstrated the presence of biases across several Twitter stance detection datasets, which aid simple tweet classifiers to achieve impressive performance.",
"We carried out an investigation for presence of bias for the WTWT dataset and found correlations of stance-class with sentiment and lexical choice.",
"Consequently, we proposed a new bias-free stance detection dataset tWTWT, the largest of its kind.",
"Evaluation of our baselines on this new dataset demonstrates scope for future research on stance detection.",
"The observations are also crucial for the creation of new stance detection datasets.",
"Our future work includes analysing multilingual datasets and exploring explainable target aware stance detection models.",
"We would like to thank the anonymous reviewers for their valuable feedback and suggestions.",
"We also thank the Computer Science and Engineering Department at the IIT Kharagpur for providing us with the compute facilities for our research."
] | [
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"abstain",
"objective",
"other",
"other"
] |
[
"Multilingual models have demonstrated impressive cross-lingual transfer performance.",
"However, test sets like XNLI are monolingual at the example level.",
"In multilingual communities, it is common for polyglots to code-mix when conversing with each other.",
"Inspired by this phenomenon, we present two strong black-box adversarial attacks (one word-level, one phrase-level) for multilingual models that push their ability to handle code-mixed sentences to the limit.",
"The former uses bilingual dictionaries to propose perturbations and translations of the clean example for sense disambiguation.",
"The latter directly aligns the clean example with its translations before extracting phrases as perturbations.",
"Our phrase-level attack has a success rate of 89.75% against XLM-R large , bringing its average accuracy of 79.85 down to 8.18 on XNLI.",
"Finally, we propose an efficient adversarial training scheme that trains in the same number of steps as the original model and show that it improves model accuracy.",
"1 1 Introduction The past year has seen incredible breakthroughs in cross-lingual generalization with the advent of massive multilingual models that aim to learn universal language representations (Pires et al., 2019; Wu and Dredze, 2019; Conneau et al., 2020b).",
"These models have demonstrated impressive cross-lingual transfer abilities: simply fine-tuning them on task data from a high resource language such as English after pretraining on monolingual corpora was sufficient to manifest such abilities.",
"This was observed even for languages with different scripts and no vocabulary overlap (K et al., 2020).",
"However, transferring from one language to another is insufficient for NLP systems to understand multilingual speakers in an increasingly multilingual world (Aronin and Singleton, 2008).",
"In many 1 Code: github.com/salesforce/adversarial-polyglots",
"(a) Aligned words across sentences",
"(b) Extracted candidate perturbations",
"(c) Final multilingual adversary Figure 1: BUMBLEBEE 's three key stages of adversary generation:",
"multilingual societies (e.g., Singapore, Papua New Guinea, etc.), it is common for multilingual interlocutors to produce sentences by mixing words, phrases, and even grammatical structures from the languages in their repertoires (Matras and Sakel, 2007).",
"This is known as code-mixing (Poplack et al., 1988), a phenomenon common in casual conversational environments such as social media and text messages.",
"2 Hence, it is crucial for NLP systems serving multilingual communities to be robust to code-mixing if they are to understand and establish rapport with their users (Tay, 1989; Bawa et al., 2020) or defend against adversarial polyglots.",
"Although gold standard data (Bali et al., 2014; Patwa et al., 2020) is important for definitively evaluating code-mixed text processing ability, such datasets are expensive to collect and annotate.",
"The 2 Examples of real code-mixing in Appendix A. dizzying range of potential language combinations further compounds the immensity of such an effort.",
"We posit that performance on appropriately crafted adversaries could act as a lower bound of a model's ability to generalize to the distribution simulated by said adversaries, an idea akin to worst-case analysis (Divekar, 1984).",
"For example, Tan et al. (2020b) showed that an NLP system that was robust to morphological adversaries was less perplexed by dialectal text exhibiting morphological variation.",
"Likewise, if a system is robust to code-mixed adversaries constructed from some set of languages, it is reasonable to expect it to also perform better on real code-mixed text in those languages.",
"While they may not fully model the intricacies of real code-mixing (Sridhar and Sridhar, 1980), we believe that they can be useful in the absence of appropriate evaluation data.",
"Hence, we: Propose two strong black-box adversarial attacks targeting the cross-lingual generalization ability of massive multilingual representations (Fig. 1), demonstrating their effectiveness on state-of-the-art models for natural language inference and question answering.",
"To our knowledge, these are the first two multilingual adversarial attacks.",
"Propose an efficient adversarial training scheme that takes the same number of steps as standard supervised training and show that it creates more language-invariant representations, improving accuracy in the absence of lexical overlap.",
"Multilingual classifiers.",
"Low resource languages often lack support due to the high cost of annotating data for supervised learning.",
"An approach to tackle this challenge is to build cross-lingual representations that only need to be trained on task data from a high resource language to perform well on another under-resourced language (Klementiev et al., 2012).",
"Artetxe and Schwenk (2019) presented the first general purpose multilingual representation using a BiLSTM encoder.",
"Following the success of Transformer models (Vaswani et al., 2017), recent multilingual models like mBERT (De-vlin et al., 2019), Unicoder (Huang et al., 2019), and XLM-R (Conneau et al., 2020a) take the pretraining fine-tuning paradigm into the multilingual realm by pretraining Transformer encoders on unlabeled monolingual corpora with various language modeling objectives before fine-tuning them on task data from a high-resource language such as English.",
"Code-mixed text processing.",
"Previous research on code-mixed text processing focused on constructing formal grammars (Joshi, 1982) and token-level language identification (Bali et al., 2014; Solorio et al., 2014; Barman et al., 2014), before progressing to named entity recognition and part-of-speech tagging (Ball and Garrette, 2018; AlGhamdi and Diab, 2019; Aguilar and Solorio, 2020).",
"Recent work explores code-mixing in higher-level tasks such as question answering and task-oriented dialogue (Chandu et al., 2019; Ahn et al., 2020).",
"Muller et al. (2020) demonstrate mBERT's ability to transfer to an unseen dialect by exploiting its speakers' tendency to code-mix.",
"A key challenge of developing models that are robust to code-mixing is the availability of code-mixed datasets.",
"Hence, Winata et al. (2019) use a pointer-generator network to generate synthetically code-mixed sentences while Pratapa et al. (2018) explore the use of parse trees for the same purpose.",
"Yang et al. (2020) propose to improve machine translation with code-switching pretraining, replacing words with their translations in a similar manner to masked language modeling (Devlin et al., 2019).",
"These word pairs are constructed from monolingual corpora using cosine similarity.",
"Sitaram et al. (2019) provide a comprehensive survey of code-mixed language processing.",
"Word-level adversaries.",
"Modified inputs aimed at disrupting a model's predictions are known as adversarial examples (Szegedy et al., 2014).",
"In NLP, perturbations can be applied at the character, subword, word, phrase, or sentence levels.",
"Early word-level adversarial attacks (Ebrahimi et al., 2018; Blohm et al., 2018) made use of the target model's gradients to flip individual words to trick the model into making the wrong prediction.",
"However, while the perturbations were adversarial for the target model, perturbed word's original semantics was often not preserved.",
"This could result in the expected prediction changing and making the model appear more brittle than it actually is.",
"Later research addressed this by searching for adversarial rules (Ribeiro et al., 2018) or by constraining the candidate perturbations to the k nearest neighbors in the embedding space (Alzantot et al., 2018; Michel et al., 2019; Ren et al., 2019; Zhang et al., 2019; Li et al., 2019; Jin et al., 2020).",
"Zang Original P: The girl that can help me is all the way across town.",
"et al. (2020) take another approach by making use of a annotated sememes to disambiguate polysemous words, while Tan et al. (2020a) perturb only the words' morphology and encourage semantic preservation via a part-of-speech constraint.",
"Other approaches make use of language models to generate candidate perturbations (Garg and Ramakrish-nan, 2020; Han et al., 2020).",
"Wallace et al. (2019) find phrases that act as universally adversarial perturbations when prepended to clean inputs.",
"Zhang et al. (2020) provide a comprehensive survey.",
"Summary.",
"Existing work on pretrained multilingual models has highlighted their impressive zero-shot cross-lingual transfer ability, though some analyses (K et al., 2020) indicate this could be a result of exploiting lexical overlaps rather than an indication of true cross-lingual understanding.",
"Although language-agnosticity is commonly measured via cross-lingual retrieval tasks such as LAReQA (Roy et al., 2020) and similarity search (Artetxe and Schwenk, 2019), we offer a different perspective in this paper by operationalizing it as a model's ability to handle code-mixing.",
"Existing evaluations for code-mixed text processing focus on gold annotated data, but such datasets are (relatively) expensive to compile and face similar scarcity challenges as those for low-resource languages.",
"Existing word-/phrase-level adversarial attacks probing the limits of model robustness have largely focused on monolingual (English) inputs.",
"In contrast, our adversarial attacks are designed to test the robustness of multilingual models to adversarial code-mixers.",
"Finally, we propose an efficient adversarial training scheme to improve the robustness of said models to code-mixed adversaries.",
"Code-mixing is a phenomenon where a multilingual speaker mixes words, and even grammatical",
"rules, from different languages in a single sentence.",
"This is distinguished from code-switching, which occurs at the inter-sentential level (Kachru, 1978).",
"Extreme code-mixing.",
"Inspired by the proliferation of real-life code-mixing and polyglots, we propose POLYGLOSS and BUMBLEBEE , two multilingual adversarial attacks that adopt the persona of an adversarial code-mixer.",
"We focus on the lexical component of code-mixing, where some words in a sentence are substituted with their equivalents from another language in the interlocutor's repertoire.",
"Borrowed words fall into two categories, nonce borrowing and loanwords, though distinguishing between them is beyond the scope of this work.",
"Since most code-mixers are bilinguals, natural code-mixed sentences tend to be constructed from two languages, with one language determining the syntax of the overall sentence (Poplack et al., 1988).",
"However, in a world with an increasing number of multilingual societies, it is conceivable for code-mixing to occur between more than two languages (Tan, 1988).",
"We take this idea to the extreme to test multilingual representations for their robustness to such cross-lingual lexical variation.",
"Problem formulation.",
"Given a target multilingual model M , a clean example x with the label y , and a set of embedded languages L from which to borrow words, we aim to generate the adversarial example x (cid:48) that maximizes M 's loss.",
"Formally, x (cid:48) = arg max x c XL ( y, M ( x c )) , (1) where x c X is a candidate adversary generated by perturbing x , M is a task-specific neural model, and L ( ) is the model's loss function.",
"To obtain a code-mixed adversary, we first generate candidate adversaries by substituting words in the",
"clean example with their equivalents from another language.",
"These substitutions/perturbations can be generated by via machine translation or mined from bilingual dictionaries.",
"Following Myers-Scotton (1997), we will refer to the original example's language as the matrix language and the perturbation's language as the embedded language.",
"Next, we perform beam search on the candidates to find the adversary that maximizes the target model's loss in a black-box manner (Alg. 2 in Appendix B.1).",
"In our implementation, we also keep track of successful adversaries and return the ones with the highest and lowest losses.",
"The former is a stronger adversary, while the latter often has fewer perturbations.",
"More details are in Appendix B.1.",
"Orthographic preservation.",
"When the embedded language uses a different script from the matrix language, code-mixers tend to transliterate borrowed words into the same script (Abuhakema, 2013; Bali et al., 2014).",
"This still poses a significant challenge to multilingual models (Khanuja et al., 2020).",
"We generally preserve the embedded language's script where possible to avoid unfairly penalizing the target model since there is often no standard way of transliterating words.",
"Scalable sense disambiguation.",
"Due to the polysemous nature of many words, translating the right sense is crucial to preserving the word's (and sen-tence's) semantics.",
"Common word sense disambiguation methods (Agirre and Edmonds, 2007) use a sense tagger trained on an annotated sense inventory such as WordNet (Miller, 1995).",
"However, this approach requires individual taggers and sense inventories for each matrix and embedded language, making it a serious challenge to extend POLYGLOSS to low-resource languages.",
"Instead, we propose to filter candidate perturbations using the embedded language translation of the clean example.",
"This is easily done by checking if the candidate perturbation exists in the translation.",
"Since our examples tend to be single sentences, the probability of different senses of the same word occurring in a single sentence is generally low (Conneau et al., 2018; Popel et al., 2020).",
"This approach only requires a machine translation (MT) system and no extra linguistic information, making it highly scalable as long as a supervised (or unsupervised) machine translation system is available.",
"By using gold translations instead of machine translations, it is even possible to mostly Algorithm 1 BUMBLEBEE Require: Clean example-label pair ( x, y ) , Target Model M , Embedded languages L Ensure: Adversarial example x (cid:48) T TRANSLATE ( x, target-languages = L ) L x GETLOSS ( M , x, y ) B { ( L x , x, 0) } (cid:46) Initialize beam P ALIGNANDEXTRACTPHRASES ( x, T ) while NOTEMPTY ( B ) do L x c , x c , i POLL ( B ) C GETCANDIDATES ( x c , P [ i ]) L GETLOSS ( M , C, y ) (cid:46) Losses for C i i + 1 UPDATEBEAM ( B, L , C, i ) end while x (cid:48) POLL ( B ) return x (cid:48) guarantee semantic preservation at the word-level.",
"Although using bilingual dictionaries with our filtering method ensures that the semantics of a borrowed word matches the original, the dictionary's comprehensiveness determines the presence of sufficient candidate adversaries.",
"In addition, POLYGLOSS swaps words at the word level, which may hurt the naturalness of the resulting sentence since it is more common for code-mixers to borrow phrases than individual words (Abuhakema, 2013).",
"A solution to these issues is to replace phrases in the matrix sentence with their equivalents from the reference translations instead of using a dictionary lookup (Alg. 1).",
"A key advantage of this approach is its flexibility and scalability to more languages since it only requires parallel bitexts from the matrix and embedded languages.",
"With the advent of neural sequence-to-sequence models, such bitexts can be easily generated using publicly available MT models.",
"However, a key challenge for this approach is extracting the matrix-embedded phrase pairs from the clean example and its translation.",
"We follow common phrase-based machine translation methods and accomplish this by aligning the matrix and embedded sentences (Koehn, 2010).",
"Implementation details can be found in Appendix B.2.",
"Syntactic preservation.",
"To improve the adver-saries' naturalness, we impose an equivalence constraint (Poplack, 1980), preventing a perturbation from being applied if it is from the same language as the previous word and will disrupt the syntax of the current phrase if applied (Winata et al., 2019).",
"Such disruptions usually occur when borrowing words from languages with a different word order.",
"We first evaluate POLYGLOSS and BUMBLEBEE on XNLI (Conneau et al., 2018), then evaluate the stronger attack on XQuAD (Artetxe et al., 2020).",
"XNLI is a multilingual dataset for natural language inference (NLI) with parallel translations for each example in fifteen languages.",
"Each example comprises a premise, hypothesis, and a label with three possible classes: { contradiction, neutral, entailment } .",
"We construct two more datasets from XNLI: XNLI-13 and XNLI-32.",
"XNLI-13 comprises all XNLI languages except Swahili and Urdu due to the lack of suitable dictionaries for POLYGLOSS .",
"We then translate the English test set into eighteen other languages with MT systems to form XNLI-31, increasing the number of embedded languages POLYGLOSS can draw from.",
"XQuAD is a multilingual dataset for extractive question answering (QA) with parallel translations in eleven languages.",
"In the cross-lingual transfer setting, the models are trained on English data, MNLI (Williams et al., 2018) and SQuAD 1.1 (Rajpurkar et al., 2016), and tested on mulitlingual data, XNLI and XQuAD, respectively.",
"We perturb the premise and hypothesis for NLI and only the question for QA.",
"More experimental details can be found in Appendix D. Matrix language.",
"Although our attacks work with any language as the matrix language, we use English as the matrix language in our experiments Model Clean BUMBLEBEEXLM-R large 75.64 / 61.39 35.32 / 22.52 XLM-R base 68.90 / 53.50 17.95 / 10.33 mBERT base 64.66 / 49.47 20.66 / 11.68 Table 4: BUMBLEBEE results on XQuAD (F 1 /EM).",
"due to the availability of English T translation models and the prevalence of English as the matrix language in many code-mixing societies.",
"Models.",
"We conduct our experiments on three state-of-the-art massive multilingual encoder models: XLM-RoBERTa, mBERT, and Unicoder, each pretrained on more than 100 languages.",
"From Tables 2 and 3, we observe that all the models are significantly challenged by adversarial code-mixing, though XLM-R large is the most robust to both attacks, likely due to having more parameters.",
"However, even after filtering POLYGLOSS 's candidate perturbations by the gold translations in XNLI-13, we observe an average drop in accuracy of 80.01%, relative to the models' accuracy on the clean XNLI-13.",
"BUMBLEBEE induces even greater performance drops (average relative decrease of 90.96% on XNLI-13), likely due to its word aligner yielding more candidates than POLYGLOSS 's dictionary lookup.",
"Increasing the number of embedded languages POLYGLOSS can draw upon results in greater drops in model performance (average relative decrease in accuracy of 93.66% on XNLI-31).",
"BERTvs. XLM-based.",
"We notice that mBERT is more sensitive to intra-phrasal syntactic disruption than the XLM-based models.",
"mBERT is the most robust to BUMBLEBEE out of all the base models when the equivalence constraint is in place, yet is the least robust to POLYGLOSS .",
"However, the latter trend is replicated for BUMBLEBEE if we remove this constraint (Table 16 in Appendix G).",
"A possible explanation is that XLM-R and Unicoder were trained on monolingual CommonCrawl (CC) data, while mBERT was trained on multilingual Wikipedia, which could be considered as aligned at the article level since there are articles on the same topic in different languages.",
"Hence, it is possible that this helped to align the languages more accurately in the feature space but made it more sensitive to syntactic disruptions.",
"However, many other hyperparameters differ between the two that could have also influenced their robustness.",
"Hence, we leave a rigorous study of these factors to future work.",
"The higher performance of the XLM-based models on clean data can likely be attributed to the CC corpus being an order of magnitude larger than multilingual Wikipedia (Lauscher et al., 2020).",
"Candidate filtering.",
"In the unfiltered setting, it is impossible for POLYGLOSS to discriminate between valid and invalid senses for a given context.",
"Hence, a potential criticism is that the large difference in POLYGLOSS 's success rate between the filtered and unfiltered settings could be attributed to the inappropriate senses of polysemous words being chosen and disrupting the semantics of the sentence.",
"On the other hand, filtering perturbations with reference translations of the sentence shrinks the space of perturbations to 1 per language.",
"Due to the dictionaries' non-exhaustive nature, not every word in the matrix sentence has an entry in the dictionary to begin with, making this filtering step a significant reduction of the space of candidates.",
"To determine the likely cause of the accuracy difference between the filtered and unfiltered settings in XNLI-13, we increase the number of languages available to POLYGLOSS to thirty-one.",
"If the difference between the filtered and unfiltered settings were not due to a lack of sufficient candidates, we should observe only a minor difference between the filtered settings for both XNLI-13 and -31.",
"However, we observe a 69% drop for XLM-R large , indicating that the former accuracy difference is likely due to the reduced number of valid candidates.",
"Phrase-level adversaries.",
"In addition to generating more fluent sentences (Table 1), extracting the candidate perturbations directly from the translations does away with the need for sense disambiguation and increases the number of perturbations per example since it is not limited to a static dictionary.",
"The increased effectiveness of BUMBLEBEE compared to POLYGLOSS (1.13x) is further evidence that a key factor to the success of such adversarial attacks is the availability of sufficient candidates; increasing the dimensionality of the search space increases the probability that an adversarial example for the model exists (Goodfellow et al., 2015).",
"We also include a non-adversarial baseline (Rand.) by sampling candidates from a uniform distribution instead of searching for the worst-case perturbations.",
"Our results in Table 3 indicate that the worst-case performance of multilingual models on code-mixed data may be much lower than the scores reported on human-produced test sets since they were not Model Devanagari Transliterated (Latin) XLM-R large 61.35 41.97 XLM-R base 48.62 30.01 mBERT base 37.70 23.41 Unicoder base 49.34 30.00 Table 5: BUMBLEBEE results on XNLI en,hi using both Devanagari and Latin scripts.",
"created in a targeted, adversarial fashion.",
"Experiments on beam width and a proof of concept for fully unsupervised adversaries are in Appendix E. Transliteration.",
"Since real-life code-mixers often use a single script for the entire sentence, we now test the effect of transliteration on BUMBLEBEE 's success rate for the English + Hindi language pair.",
"We accomplish this by transliterating all candidates from Devanagari into Latin using the dictionaries released by Roark et al. (2020).",
"From Table 5, we see that transliteration significantly affects the robustness of all models, even the XLM-based ones which were pretrained on similar data.",
"XQuAD.",
"We observe that both XLM-R and mBERT are significantly challenged by BUMBLEBEE even though only the question was modified (Table 4).",
"We did not experiment on Unicoder to reduce carbon costs since its performance was almost identical to XLM-R base in our XNLI experiments.",
"POLYGLOSS or BUMBLEBEE ?",
"As expected, inspection of individual adversarial examples revealed that BUMBLEBEE generated more natural sentences than POLYGLOSS since the languages used within phrases were more consistent (Table 1).",
"However, incorrect alignments due to the word aligner's probabilistic nature could introduce occasional noise into the adversarial examples.",
"For example, we found the (en) to be often aligned with (zh) even though the former is an article and the latter a possessive.",
"We observe that the aligner performs better when the sentences have similar word orders (e.g., English-French vs. English-Chinese) and we can expect the adversaries generated in these settings to be more natural.",
"Hence, we recommend POLYGLOSS when greater preservation of word-level semantics is desired, and BUMBLEBEE when phrase-level perturbations are desired or bilingual dictionaries are unavailable.",
"Discussion.",
"K et al. (2020) noted significant performance drops in XNLI accuracy for mBERT when the premise and hypothesis were in different languages (Fake English vs. { Hindi, Russian, Spanish } ), theorizing this to be an effect of disrupting the model's reliance on lexical overlap.",
"Our experiments in 4 and 5 lend support to this hypothesis.",
"In Table 1, we see multiple examples where the prediction was flipped from contradic-tion to entailment simply by perturbing a few words.",
"If the models did not rely on lexical overlap but performed comparisons at the semantic level, such perturbations should not have severely impacted their performance.",
"Our results on QA also corroborate Lee et al. (2019)'s finding that models trained on SQuAD-style datasets exploit lexical overlap between the question and context.",
"Finally, we propose code-mixed adversarial training (CAT), an extension of the standard adversarial training paradigm (Goodfellow et al., 2015), to improve the robustness of multilingual models to adversarial polyglots.",
"In standard adversarial training, adversarial attacks are run on the training set to generate adversaries for training.",
"However, this makes adversarial training computationally expensive.",
"Hence, we take inspiration from Tan et al. (2020a)'s method of randomly sampling perturbations from an adversarial distribution and generate code-mixed perturbations using word alignment.",
"To generate the code-mixed adversarial training set X (cid:48) , we first compute the adversarial distribution P adv by enumerating the perturbations per embedded language in all successful adversaries (4).",
"Formally, P adv = { f i } i =1 ... | L | , where f i = l i (cid:80) | L | j =1 l j and L is the set of embedded languages.",
"Next, for each clean example x , we sample n languages from P adv before translating the example into the n languages and aligning the translations with x .",
"For sentence-pair classification tasks like NLI, we use a per-sentence n to further increase variation.",
"Intuitively, limiting n improves the example's naturalness and the algorithm's efficiency (the alignment is the most costly step).",
"We then extract phrases from the aligned sentences, yielding our candidate perturbations P .",
"Next, we sample a perturbation with probability from P for each phrase in x .",
"Reducing yields more natural sentences since they will be less perturbed.",
"Finally, we apply these perturbations to x , obtaining a CAT example x (cid:48) .",
"Doing this k times for all x in X and adding the result to X yields X (cid:48) (Alg. 3 in Appendix G).",
"In contrast to running the adversarial attack on the training set, sampling perturbations from a distribution does not guarantee that the resulting example will be adversarial to the model.",
"This issue can be mitigated by increasing the number of CAT examples observed during training.",
"However, this would increase the computational cost if we were to train the model for the same number of epochs.",
"Hence, we set k to one less than the number of epochs XLM-R base was fine-tuned for in 4 and train the model for one epoch on the adversarial training set.",
"This exposes the model to more variation in the same number of training steps.",
"Setting.",
"We conduct our experiments on NLI with XLM-R base with no loss of generality.",
"In 4, the model was trained for ten epochs.",
"Hence, we set k = 9 , n = 2 , = 0 .",
"5 for CAT and train all models for a similar number of steps (60k) with the same hyperparameters as 4.",
"We first test the models on the BUMBLEBEE adversaries generated in 4 before directly attacking the model.",
"Next, we construct more realistic settings by running BUMBLEBEE with only 1-2 embedded languages from standard XNLI, Swahili (sw), Hindi (hi), and Urdu (ur).",
"These languages were the lowest resourced in the pretraining data (Conneau et al., 2020a).",
"We also construct another non-adversarial test set from XNLI by randomly choosing hypotheses and premises from different languages (K et al., 2020).",
"Since the original examples are individually monolingual, this test set will reveal if a model is simply exploiting lexical overlap rather than comparing the underlying concepts.",
"Finally, we run BUMBLEBEE with embedded languages not seen during task-specific training and from a different family (Austronesian) from the XNLI languages, Filipino (tl) and Indonesian (id).",
"This zero-shot defense setting will reveal if CAT encourages the learning of more language-invariant representations, or is simply allowing the model to adapt to the adversarial distribution.",
"Baselines.",
"Since training on languages in the test set takes us out of the cross-lingual transfer setting, we train a translate-trainn baseline for a fair comparison.",
"In this setting, we train on every x and its translations in the n languages sampled in CAT, regardless of whether they contributed words to the final CAT examples.",
"We also include Ganin et al. (2016)'s domain adversarial neural network (DANN), which has been used for cross-lingual adaptation (Joty et al., 2017; Chen et al., 2018).",
"From Table 6, we observe that both training on fully translated data and on CAT examples improved accuracy on the non-adversarial test sets and robustness to code-mixed adversaries, compared to the cross-lingual transfer model that was only trained on English data.",
"Similar to K et al. (2020), we found that disrupting the models' reliance on lexical overlap (Clean DL ) hurt performance.",
"The drop was particularly significant for the cross-lingual transfer (8 points) and translate-trainn models (5.24 points).",
"On the other hand, our CAT model only suffered a 1.5-point drop, indicating that the former two models likely rely heavily on lexical overlap to make predictions, while our CAT model may be using deeper, more language-agnostic features.",
"Crucially, our CAT model achieves similar to better clean accuracy than the baselines, contrasting with prior work showing that adversarial training hurts clean accuracy (Tsipras et al., 2019).",
"Finally, our CAT model is > 1.7x more robust to adversaries constructed from all fifteen XNLI languages than the translate-trainn model.",
"Although DANN-type training improved robustness to the previous BUMBLEBEE adversaries, clean performance was significantly degraded and BUMBLEBEE was able to find even more damaging adversaries upon attacking the model directly.",
"When attacked with 1-2 embedded languages that were seen during training, CAT also yields significant improvements in robustness over the baselines: a > 7 point increase compared to translate-trainn and a > 19 point gain over the zero-shot transfer setting.",
"In the zero-shot defense setting, CAT shows a > 12-point gain over the zero-shot transfer model and a > 4.7-point gain over the translate-trainn model.",
"We believe these results to be due to CAT encouraging the learning of language-invariant representations by exposing the model to cross-lingual lexical variation and preventing the model from exploiting lexical overlaps.",
"To further understand the effect of various fine-tuning methods on XLM-R base , we visualize the <s> vector from the layer before the classification head using t-SNE (Linderman et al., 2019).",
"Here, all sentences from XNLI are passed through the representations individually.",
"If a representation were 100% language-invariant, we should expect t-SNE to be unable to separate individual languages into their own clusters.",
"Hence, the extent to which t-SNE is able to do so would indicate the amount of language-specific information in this last layer.",
"From Fig. 2a, we observe that for the crosslingual transfer model (4), t-SNE managed to organize the sentences from several languages (Chi-nese, Hindi, Thai, Urdu) into distinct clusters.",
"This indicates that a significant amount of language-specific information remains in the vector representations of sentences from these languages.",
"Visualizing the sequence-averaged embeddings makes this even clearer (Fig. 5 in Appendix G).",
"Hence, while XLM-R may be multilingual, it appears to be structured as a space of individual language subspaces as opposed to a mixed, or language-invariant space.",
"On the other hand, t-SNE was much less successful when given the representation trained with CAT (Fig. 2b).",
"Mixing multiple languages in the same sentence and showing the model multiple variants of the same sentence likely encourages the model to refine its representation such that all variants of the same sentence are represented similarly, resulting in a more language-invariant representation.",
"We acknowledge that our methods do not fully model real code-mixing since we do not learn the mixing patterns from real data and there are subtleties in real code-mixing we ignore for simplicity, e.g., accounting for the prestige of participating languages (Bhatia, 2011).",
"In addition, it is impossible to guarantee the semantic preservation of a sentence generated by BUMBLEBEE due to the word aligner's statistical nature, though we can expect more accurate alignments to improve semantic preservation.",
"Finally, while CAT improves robustness, there remains a significant gap between the robust and clean accuracies.",
"In line with recent work challenging the Anglocentricity of crosslingual models (Anastasopoulos and Neubig, 2020; Liang et al., 2020), a promising direction of future work lies in investigating how the choice of matrix language affects model robustness.",
"Ensuring that multilingual models are robust to both natural and adversarial code-mixing is important in today's increasingly multilingual world if they are to allow their target users to fully express themselves in human-machine conversations and to defend against adversarial users attempting to evade toxicity/misinformation detection systems.",
"To approximate a lower bound for model performance on lexical code-mixing, we propose two strong black-box multilingual adversarial attacks and demonstrate their effectiveness on state-of-the-art cross-lingual NLI and QA models.",
"The former generates perturbations from bilingual dictionaries and disambiguates between senses using sentence translations, while the latter generates perturbations by aligning sentences from different languages.",
"Next, we show that training on code-mixed data synthesized via word alignment improves clean and robust accuracy when models are prevented from exploiting lexical overlap without hurting clean accuracy.",
"Crucially, we achieve this in the same number of steps as standard supervised training.",
"Finally, we use t-SNE visualizations to show that multilingual models are not necessarily language-invariant and that our code-mixed adversarial training scheme encourages language-invariance.",
"Adversarial attacks and defenses are double-edged swords.",
"On one hand, adversarial examples expose the gaps in existing models and help to focus the research community's attention on flaws that need to be addressed before these models can be used reliably in noisy, real-world environments.",
"On the other, the same adversarial attacks can be used by malicious actors to bypass toxicity/misinformation detection systems.",
"Similarly, methods for improving adversarial robustness can be used to defend against malicious actors and improve robustness to natural noise or linguistic variation, yet they can also be used to strengthen automated censorship systems and limit freedom of speech.",
"For example, our adversarial attacks could be used both as a lower bound for model performance on naturally occurring code-mixed text and to bypass misinformation detection systems while preserving the message's intelligibility for multilingual speakers.",
"Our adversarial training method could be used to both improve machine understanding of code-mixers by making multilingual representations more language-invariant and suppress the freedom of speech of polyglots who could have been using code-mixing to evade censorship.",
"At the same time, technology strongly shapes our behavior (Reeves et al., 2019).",
"Consequently, given the centrality of code-switching/mixing to many polyglots' lived experiences (Duff, 2015) and the positive correlations between multilingualism, code-switching, and creativity (Leikin, 2013; Kharkhurin and Wei, 2015; Furst and Grin, 2018), we should ensure that the natural language technologies we build do not inhibit multilingual speakers from fully expressing themselves, e.g., by discouraging code-mixing due to non-understanding.",
"In addition, studies have found that aphasic polyglots code-mix more frequently than neurotypical polyglots to cope with word-retrieval difficulties (Goral et al., 2019), making it important for natural language technologies to be robust to code-mixing if they are to be inclusive.",
"Therefore, we include both adversary generation and defense methods to avoid tipping the balance too far in either direction.",
"We would like to thank Cynthia Siew (NUS Psy-chology), Greg Bennett and Kathy Baxter (Sales-force), Lav Varshney (UIUC Electrical and Computer Engineering), Min-Yen Kan (NUS Computer",
"Computer Science), and our anonymous reviewers for their invaluable feedback.",
"We are also grateful to Guangsen Wang, Mathieu Ravaut, Soujanya Lanka, and Tuyen Hoang for contributing manually code-mixed sentences and Bari M Saiful for pointers on replicating the XLM-R results.",
"Samson is supported by Salesforce and Singapore's Economic Development Board under its Industrial Postgraduate Programme."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"In hierarchical text classification, we perform a sequence of inference steps to predict the category of a document from top to bottom of a given class taxonomy.",
"Most of the studies have focused on developing novels neural network architectures to deal with the hierarchical structure, but we prefer to look for efficient ways to strengthen a baseline model.",
"We first define the task as a sequence-to-sequence problem.",
"Afterwards, we propose an auxiliary synthetic task of bottom-up-classification.",
"Then, from external dictionaries, we retrieve textual definitions for the classes of all the hierarchy's layers, and map them into the word vector space.",
"We use the class-definition embeddings as an additional input to condition the prediction of the next layer and in an adapted beam search.",
"Whereas the modified search did not provide large gains, the combination of the auxiliary task and the additional input of class-definitions significantly enhance the classification accuracy.",
"With our efficient approaches, we outperform previous studies, using a drastically reduced number of parameters, in two well-known English datasets.",
"Hierarchical text classification (HTC) aims to categorise a textual description within a set of labels that are organized in a structured class hierarchy (Silla and Freitas, 2011).",
"The task is perceived as a more challenging problem than flat text classification, since we need to consider the relationships of the nodes from different levels in the class taxonomy (Liu et al., 2019).",
"Both flat text classification and HTC have been tackled using traditional machine learning classifiers (Liu et al., 2005; Kim et al., 2006) or deep neural networks (Peng et al., 2018; Conneau et al., 2017).",
"Nevertheless, the majority of the latest approaches consider models with a large number of parameters that require extended training time.",
"In the flat-classification scenario, some studies have addressed the problem of efficiency by proposing methods that do not focus on the model architecture, but in external ways of improving the results (Joulin et al., 2017; Howard and Ruder, 2018).",
"However, the listed strategies are still underdeveloped for HTC, and the most recent and effective methods are still computationally expensive (Yang et al., 2019; Banerjee et al., 2019).",
"The described context opens our research question: How can we improve HTC at a lower computational cost?",
"Therefore, our focus and main contributions are: A robust model for HTC, with few parameters and short training time, that follows the paradigm of sequence-to-sequence learning.",
"The practical application of an auxiliary (and not expensive) task that strengthens the model capacity for prediction in a bottom-up scheme.",
"An exploration of strategies that take advantage of external information about textual definition of the classes.",
"We encode the definitions in the word vector space and use them in: (1) each prediction step and (2) an adapted beam search.",
"Hierarchical classification resembles a multi-label classification where there are hierarchical relationships between labels, i.",
"e., labels at lower levels are conditioned by labels at higher levels in the hierarchy.",
"For that reason, we differ from previous work and address the task as a sequence-to-sequence problem, where the encoder receives a textual description and the decoder generates a class at each step (from the highest to the lowest layer in the hier-archy).",
"Our baseline model thereafter is a sequence-to-sequence neural network (Sutskever et al., 2014) composed of: Embedding layer: To transform a word into a vector w i , where i { 1,...,N } and N is the number of tokens in the input document.",
"We use pre-trained word embeddings from Common Crawl (Grave et al., 2018) for the weights of this layer, and we do not fine-tune them during training time.",
"Encoder: It is a bidirectional GRU (Cho et al., 2014) unit that takes as input a sequence of word vectors and computes a hidden vector h i per each i time step of the sequence.",
"Attention layer: We employ the attention variant of Bahdanau et al. (2015), and generate a context vector a i for each encoder output h i .",
"Decoder: To use the context a i and hidden h i vectors to predict the c l j l jk class of the hierarchy, where j { 1,...,M } .",
"M is the number of levels in the class taxonomy, l j represents the j-th layer of the hierarchy, and l jk is the k -th class in level l j .",
"Similar to the encoder, we use a bidirectional GRU.",
"For an input sequence of words, the model predicts a sequence of classes.",
"Given the nature of recurrent neural networks, iterating over a sequence stores historical information.",
"Therefore, for the last output computation we could take the previous inputs into consideration.",
"Previous work in HTC (Kowsari et al., 2017; Sinha et al., 2018) usually starts by predicting the most general category (Parent node) and continues to a more specific class (Child nodes) each time.",
"However, by following the common approach, the prediction of the most specific classes will have a smaller impact than the more general ones when the error propagates.",
"In this way, it could be harder to learn the relationship of the last target class with the upper ones.",
"Inspired by reversing the order of words in the input sequence (Sutskever et al., 2014), we propose an auxiliary synthetic task that changes the order of the target class levels in the output sequence.",
"In other words, we go upward from the child nodes to the parent.",
"With the proposed task, the parent and child nodes will have a similar impact on the error propagation, and the network could learn more robust representations.",
"We analyze the potential of using textual definitions of classes for external knowledge integration.",
"For each class c l j l jk in any level l j of the hierarchy, we could obtain a raw text definition from an external dictionary to compute a vector representation cv , that from now on we call the class definition vector (CDV).",
"We thereafter use the CDV representations with the two following strategies.",
"For a given document D , we classify it among the target classes C = (c l 1 l 1 k ,...,c l M l Mk ) , where M is the number of layers in the taxonomy.",
"In our approach, we predict the highest-level class c l 1 l 1 k and then use its CDV representation cv l 1 l 1 k as an additional input (alongside the encoder outputs) to the attention layer for the prediction of the next level class c l 2 l 2 k .",
"We continue the process for all the layers of the class hierarchy.",
"Beam search is a search strategy commonly used in neural machine translation (Freitag and Al-Onaizan, 2017), but the algorithm can be used in any problem that involves word-by-word decoding.",
"We assess the impact of applying beam search in HTC, and introduce an adapted version that takes advantage of the computed CDV representations: T (cid:88) i =0 logP ( y i | x, y 1 , ..., y t 1 ) + CD ( z, y i ) (1) In each step of the decoding phase, we predict a class that belongs to the corresponding level of the class hierarchy.",
"Given a time step i , the beam search expands all the k (beam size) possible class candidates and sort them by their logarithmic probability.",
"In addition to the original calculation, we compute the cosine distance between the CDV of a class candidate and the average vector of the word embeddings from the textual description z that we want to classify (CD component in Equation 1).",
"We add the new term to the logarithmic probability of each class candidate, re-order them based on the new score, and preserve the topk candidates.",
"Our intuition behind the added component is similar to the shallow fusion in the decoder of a WOS DBpedia Number of documents 46,985 342,782 Classes in level 1 7 9 Classes in level 2 143 70 Classes in level 3 NA 219 Table 1: Information of WOS and DBPedia corpora neural machine translation system (Gulcehre et al., 2017).",
"Thus, the class-definition representation might introduce a bias in the decoding, and help to identify classes with similar scores in the classification model.",
"Datasets.",
"We test our model and proposed strategies in two well-known hierarchical text classification datasets previously used in the evaluation of state-of-the-art methods for English: Web of Science (WOS; Kowsari et al., 2017) and DBpedia (Sinha et al., 2018).",
"The former includes parent classes of scientific areas such as Biochemistry or Psychology, whereas the latter considers more general topics like Sports Season, Event or Work.",
"General information for both datasets is presented in Table 1.",
"Model, hyper-parameters and training.",
"We use the AllenNLP framework (Gardner et al., 2018) to implement our methods.",
"Our baseline consists of the model specified in 2.1.",
"For all experiments, we use 300 units in the hidden layer, 300 for embedding size, and a batch size of 100.",
"During training time, we employ Adam optimiser (Kingma and Ba, 2014) with default parameters ( 1 = 0 . 9 , 2 = 0 . 98 , = 10 9 ).",
"We also use a learning rate of 0.001, that is divided by ten after four consecutive epochs without improvements in the validation split.",
"Furthermore, we apply a dropout of 0.3 in the bidirectional GRU encoder-decoder, clip the gradient with 0.5, and train the model for 30 epochs.",
"For evaluation, we select the best model in the validation set of the 30 epochs concerning the accuracy metric.",
"For learning with the auxiliary task, we interleave the loss function between the main prediction task and the auxiliary task ( 2.2) every two epochs with the same learning rate.",
"We aim for both tasks to have equivalent relevance in the network training.",
"To compute the class-definition vectors, we extract the textual definitions using the Oxford Dictionaries API 1 .",
"We vectorize each token of the descriptions using pre-trained Common Crawl embeddings (the same as in the embedding layer) and average them.",
"For the beam search experiments, we employ a beam size (k) of five, and assess both the original and adapted strategies.",
"We note that the sequence-to-sequence baseline model use a beam size of one 2 .",
"Table 2 presents the average accuracy results of our experiments with each proposed method over the test set.",
"For all cases, we maintain the same architecture and hyper-parameters in order to estimate the impact of the auxiliary task, parent node conditioning, and the beam search variants independently.",
"Moreover, we examine the performance of the combination of our approaches 3 .",
"In the individual analysis, we observe that the parent node conditioning and the auxiliary task provides significant gains over the seq2seq baseline, which support our initial hypothesis about the relevance of the auxiliary loss and the information of the parent class.",
"Conversely, we note that the modified beam search strategy has the lowest gain of all the experiments in WOS, although it provides one of the best scores for DBpedia.",
"One potential reason is the new added term for the k -top candidates selection (see Eq. 1), as it strongly depends on the quality of the sentence representation.",
"The classes of WOS includes scientific areas that are usually more complex to define than the categories of the DBpedia database 4 .",
"We also notice that the accuracy increment is relatively higher for all experiments on the WOS corpus than on DBpedia.",
"A primary reason might be the number of documents in each dataset, as DBpedia contains almost seven times the number 1 https://developer.oxforddictionaries.com/ 2 In preliminary experiments, we considered a beam size of ten, but we did not note a significant improvement.",
"3 We tried all the possible combinations, but only report the ones that offer an improvement over the individual counterparts.",
"4 Averaging words vectors to generate a sentence embedding is an elemental approach.",
"Further work could explore the encoding of the class-definition embeddings directly from the training data, or to weight the scores of the classification model and the similarity score to balance the contribution of each term.",
"of documents of WOS.",
"If we have a large number of training samples, the architecture is capable of learning how to discriminate correctly between classes only with the original training data.",
"However, in less-resourced scenarios, our proposed approaches with external knowledge integration could achieve a high positive impact.",
"As our strategies are orthogonal and focus on different parts of the model architecture, we proceed to combine them and assess their joint performance.",
"In the case of WOS, we observe that every combination of strategies improves the single counterparts, and the best accuracy is achieved by the merge of the auxiliary task and PNC, but with an original beam search of size five.",
"Concerning DBpedia, most of the results are very close to each other, given the high accuracy provided since the seq2seq baseline.",
"However, we note the relevance of combining the PNC strategy with the original or modified beam search to increase the performance.",
"Finally, we compare our strategies to the best HTC models reported in previous studies (Kowsari et al., 2017; Sinha et al., 2018).",
"We then observe that the results of our methods are outstanding in terms of accuracy and number of parameters.",
"Moreover, the training time of each model takes around one hour (for the 30 epochs), and the proposed auxiliary task do not add any significant delay.",
"Most of the studies for flat text classification primarily focus on proposing a variety of novel neural architectures (Conneau et al., 2017; Zhang et al., 2015).",
"Other approaches involve a transfer learning step to take advantage of unlabelled data.",
"McCann et al. (2017) used the encoder unit of a neural machine translation model to provide context for other natural language processing models, while Howard and Ruder (2018) pre-trained a language model on a general-domain monolingual corpus and then fine-tuned it for text classification tasks.",
"In HTC, there are local or global strategies (Silla and Freitas, 2011).",
"The former exploits local information per layer of the taxonomy, whereas the latter addresses the task with a single model for all the classes and levels.",
"Neural models show excellent performance for both approaches (Kowsari et al., 2017; Sinha et al., 2018).",
"Furthermore, other studies focus on using transfer learning for introducing dependencies between parent and child categories (Banerjee et al., 2019) and deep reinforcement learning to consider hierarchy information during inference (Mao et al., 2019).",
"The incorporation of external information in neural models has offered potential in different tasks, such as in flat text classification.",
"By using categorical metadata of the target classes (Kim et al., 2019) and linguistic features at word-level (Mar-gatina et al., 2019), previous studies have notably improved flat-text classification at a moderate computational cost.",
"Besides, Liu et al. (2016) outperform several state-of-the-art classification baselines by employing multitask learning.",
"To our knowledge, the latter strategies are not explicitly exploited for HTC.",
"For this reason, our study focuses on the exploration and evaluation of methods that enable hierarchical classifiers to achieve an overall accuracy improvement with the least increasing complexity as possible.",
"We presented a bag of tricks to efficiently improve hierarchical text classification by adding an auxiliary task of reverse hierarchy prediction and integrating external knowledge (vectorized textual definitions of classes in a parent node conditioning scheme and in the beam search).",
"Our proposed methods established new state-of-the-art results with class hierarchies on the WOS and DBpedia datasets in English.",
"Finally, we also open a path to study integration of knowledge into the decoding phase, which can benefit other tasks such as neural machine translation.",
"We are thankful to the Informatics' support team at PUCP, and specially to Corrado Daly.",
"We also appreciate the collaboration of Robert Aduviri and Fabricio Monsalve in a previous related project that build up our research question.",
"Besides, we thanks the comments of Fernando Alva-Manchego on a draft version and the feedback of our anonymous reviewers.",
"Finally, we acknowledge the support of NVIDIA Corporation with the donation of a Titan Xp GPU used for the study."
] | [
"method",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"result",
"objective",
"method",
"other",
"other",
"other",
"other"
] |
[
"This work presents an information-theoretic operationalisation of cross-linguistic nonarbitrariness.",
"It is not a new idea that there are small, cross-linguistic associations between the forms and meanings of words.",
"For instance, it has been claimed (Blasi et al., 2016) that the word for TONGUE is more likely than chance to contain the phone [l].",
"By controlling for the influence of language family and geographic proximity within a very large concept-aligned, cross-lingual lexicon, we extend methods previously used to detect within language non-arbitrariness (Pimentel et al., 2019) to measure cross-linguistic associations.",
"We find that there is a significant effect of non-arbitrariness, but it is unsurprisingly small (less than 0.5% on average according to our information-theoretic estimate).",
"We also provide a concept-level analysis which shows that a quarter of the concepts considered in our work exhibit a significant level of cross-linguistic non-arbitrariness.",
"In sum, the paper provides new methods to detect cross-linguistic associations at scale, and confirms their effects are minor.",
"The arbitrariness of the sign, i.e. the principle that a word's form is unrelated to what it denotes, was one of the cornerstones in the structuralist revolution in linguistics (Saussure, 1916).",
"While languages do seem to adhere to the principle to a large extent, researchers have repeatedly uncovered evidence that there are preferences in formmeaning matches (Perniss et al., 2010).",
"Indeed, the notion that these small, but systematic, formmeaning relations hold across the world's languages has become a mainstream topic of research in the last couple of decades.",
"1 1 See 2 below for a brief literature review and Dingemanse et al. (2015) for a more comprehensive one.",
"Determining effective metrics to capture meaningful formmeaning associations is far from trivial, though, and researchers have explored a substantial number of statistical and heuristic approaches (Bergen, 2004; Wichmann et al., 2010; Johansson and Zlatev, 2013; Haynie et al., 2014; Gutierrez et al., 2016; Blasi et al., 2016; Joo, 2019).",
"Previous studies differ from each other along (at least) three axes:",
"(i) which unit is used to measure wordform similarity (e.g., phonemes, sub-phone-mic features or arbitrary sequences);",
"(ii) how they deploy a baseline for statistical comparison (e.g. permute forms with meanings, or propose a generative model that yields wordforms uninformed by their meaning) and",
"(iii) whether they study non-ar-bitrariness within or across languages.",
"Pimentel et al. (2019) provide the first holistic measure of non-arbitrariness (in a large vocabulary sample of a single language) using tools from information theory, and apply their measure to discover phonesthemes.",
"2 Our work extends their approach to the problem of discovering and estimating the strength of frequent cross-linguistic formmeaning associations (e.g. iconicity and systematicity) in individual concepts.",
"We do this by adapting Pimentel et",
"al.'s (2019) approach, modelling 2 Phonesthemes are sub-morphemic units which are associated in a language with some small semantic domain.",
"formmeaning associations in a large collection of basic vocabulary wordlists covering close to 3 / 4 of the world's languages (see Fig. 1 and Wichmann et al., 2020).",
"By taking the words in these lists to be random variables and asking how much information within wordforms is explained by the meaning they refer to, we obtain a quantitative estimate of cross-linguistic nonarbitrariness.",
"Specifically, we propose to model a universal (language-independent) form distribution (using neural language models), and then we estimate concept-specific distributions.",
"With these in hand, we are able to determine how much the meaning of a concept predicts its form cross-linguistically by measuring the mutual information between them; see 4 for details.",
"This method further allows us to identify which concepts exhibit stronger non-arbitrary formmeaning association and which form patterns are more likely to occur in them.",
"In order to maximise the reliability of the observed associations, we implement stringent controls for genealogical and areal effects, as well as for the size of each language family.",
"See 4.5 for details on these controls.",
"After introducing these controls, we find that wordlists display an average of around 0 .",
"01 bits of formmeaning mutual information explained by cross-linguistic nonarbitrariness ( 0 . 3% of the wordform uncertainty) with substantial variation among concepts and languages.",
"3 Of the 100 basic concepts in our data, we find a statistically identifiable pattern in 26 of them ( p < 0 . 01 ).",
"Inspection of the results show that our method recovers previously proposed associations, e.g. the association of [l] with the concept TONGUE and [p] with FULL (Blasi et al., 2016).",
"Several studies have looked at non-arbitrary patterns in languages, be it systematicity (Shillcock et al., 2001; Gutierrez et al., 2016; Dautriche et al., 2017; Pimentel et al., 2019) or iconicity (Dingemanse, 2012, 2018).",
"With respect to cross-linguistic non-arbitrariness specifically, the hypothesised sources of formmeaning associations range from the fact that humans are endowed with the same neurocognitive architecture (Bankieris and Simner, 2015) to their encountering similar experiences within the world (Parise et al., 2014).",
"While global non-arbitrary formmeaning associations have been hypothesised to exist at different levels of linguistic description (Haiman, 1980), by far the component of language that has received the most attention in this respect is the lexicon.",
"A few circumstances facilitate this type of research in contrast to other domains of grammar.",
"For instance, the space of possible words that could be used in a given language to refer to an arbitrary referent is large, whereas the relative canonical order of a verb with respect to its object complement is substantially smaller (which renders cross-linguistic similarities less informative than in the first case).",
"Additionally, the sheer amount of data available in the form of wordlists exceeds other types of linguistic data for the languages of the world.",
"As a consequence, some of the largest evaluations of non-arbitrary formmeaning associations involve systematic wordlists with comparable referents across languages (Wichmann et al., 2010; Johansson and Zlatev, 2013; Haynie et al., 2014; Blasi et al., 2016; Joo, 2019).",
"Most of these studies were focused on the regular association between phonemic or phonetic units with meaning, occasionally controlling for other potential sources of formmeaning association such as phonotactics or word length (Blasi et al., 2016).",
"While useful, the estimates emerging from this type of study can be regarded as lower bounds to the total amount of non-arbitrary associations found in the vocabulary.",
"Recent efforts have resulted in datasets with thousands of languages (Wichmann et al., 2020), with which linguists can look for universal statistical patterns (Wichmann et al., 2010; Blasi et al., 2016).",
"These studies, though, only looked at the presence (or not) of individual phones in words, not accounting for their connections.",
"Our methods rely on neural phonotactic models, similar to those used by Pimentel et al. (2020), thus capturing a broader range of potential correspondences.",
"An exceptional resource with substantial cross-linguistic representation is provided in the Automated Similarity Judgment Program, better known by its acronym ASJP (Wichmann et al., 2020).",
"ASJP is a collection of basic vocabulary wordlists, i.e. lists of words with referents that are expected to be widely attested across human societies.",
"It involves body parts, some colour terms, lower numerals, general properties (such as big or round), and flora and fauna that are usually found in places where humans live (e.g. trees and dogs).",
"The individual words in ASJP are transcribed by field linguists in a specific phonetic annotation scheme that involves 41 symbols, chosen in order to maximise cross-linguistic utility by merging rare phones with similar phonetic features within the same category.",
"These wordlists are assembled with the purpose of studying the history of languagesfollowing the tradition established by Swadesh (1955)under the principles of the comparative method.",
"ASJP has gathered, in its latest iterations, data for close to 3 / 4 of the world's languages, which makes it an unparalleled resource for evaluating formmeaning associations across spoken languages.",
"Furthermore, the vocabulary in its wordlists was chosen as so to be resistant to borrowingsmaking it especially interesting for our purposes of finding universal formmeaning biases.",
"We leave out pidgin and creole data, 4 as defined by the World Atlas of Language Structures (Dryer and Haspelmath, 2013), since they are ambiguous with relation to their genealogical affili-ation.",
"We also omit constructed and fake languages (e.g. Esperanto and Taensa).",
"This leaves us 9148 doculects (or wordlists) from 5189 languages.",
"5 Formmeaning associations have been studied in earlier versions of this dataset.",
"Firstly, Wichmann et al. (2010) studied the average form across different concepts in ASJP, and found a number of tentative patterns pointing to non-arbitrariness.",
"Yet the lack of historical and statistical controls compromised the nature of such patterns: formmeaning associations could be due to widespread linguistic contact (e.g. the word for DOG , Pache et al. 2016) or to its fortuitous presence in large families.",
"Blasi et al. (2016), however, provide a conservative evaluation of individual formmeaning associations by imposing a restrictive set of conditions.",
"They looked for associations that were present in a minimum number of continents and language families.",
"This resulted in a sizable number of non-arbitrary 4 Pidgins are believed to rely particularly on iconicity due to their smaller degree of lexicalisation; this reliance then diminishes as it morphs into a creole (Romaine, 1988).",
"Future work could expand the methods here to study this phenomenon.",
"5 When there is more than one wordlist for one language (as defined by their ISO-codes) one can sometimes refer to them as different dialects, but these are often just alternative versions of the same language as recorded by different linguists.",
"There can be as much variation in such different recordings as among different dialects recorded by one and the same linguist.",
"For those reasons, it is practical to use the term doculect , which we adopt here.",
"This is a neutral term that refers to some dialect as recorded in some specific source .",
"associations, many of which had been highlighted as interesting based on behavioural and linguistic experiments in a handful of languages.",
"Data Disclaimer.",
"As mentioned above, ASJP gathers lists of wordforms that are expected to be present across most human societies and their corresponding language(s).",
"While this guarantees a fair coverage in our study, it limits the scope of our conclusions to those concepts present herein.",
"We describe each word as comprised by form and meaning, which we represent as a pair ( w ( n ) , v ( n ) ) .",
"The form w ( n ) is represented as a phone string where is a phonetic alphabet.",
"In this work, we take to be the set of 41 phonetic symbols in ASJP plus the end-of-string symbol.",
"We write W to denote a -valued random variable.",
"The meaning v ( n ) { 0 , 1 } K is represented by a one-hot vector, where K is the number of analysed concepts.",
"6 We write V to denote a { 0 , 1 } K -valued random variable.",
"The goal of this work is to measure cross-linguistic formmeaning associations, operationalised as the mutual information (MI) between a form-valued random variable W and a meaning-valued random variable V .",
"Symbolically, we are interested in computing (Cover and Thomas, 2012): I( W ; V ) = H( W ) H( W | V ) (1) Intuitively, this quantity captures the uncertainty we have over the form, the entropy H( W ) , minus how much uncertainty we have over the form given the meaning, the conditional entropy H( W | V ) .",
"Thus, if eq.",
"(1) is zero, its minimum, we have the result that meaning tells us absolutely nothing about the wordform.",
"On the other hand, if eq.",
"(1) is min { H( W ) , H( V ) } , its maximum, we have that the form is a deterministic function of the meaning (or the opposite; the meaning being deterministically determined given the form).",
"6 We note Pimentel et al. (2019) used high-dimensional distributional semantic vectors to represent meaning, while we use a one-hot vector.",
"However, their work relied on a specific language's WORD 2 VEC a choice which could potentially bias our results with that language's properties.",
"We did, however, run an extra experiment with English WORD 2 VEC ; this led to similar conclusions to the ones presented here.",
"That the mutual information may take values in [0 , min { H( W ) , H( V ) } ] together with the fact that, for our specific study, H( W ) is smaller than H( V ) suggests a more interpretable metric called the uncertainty coefficient : U( W | V ) = I( W ; V ) H( W ) (2) This quantity is the proportion of uncertainty in the form reduced by knowing the meaning.",
"Both mutual information and uncertainty coefficients are general measures of non-arbitrariness.",
"One might also inquire about how non-arbitrary a single form meaning pair is.",
"To measure this, we propose pointwise mutual information (PMI): PMI( w ; v ) = log p ( w | v ) p ( w ) (3) 4.3 Approximating Mutual Information As noted above, we want to estimate the entropy of language agnostic wordforms, i.e. H( W ) = (cid:88) w * p ( w ) log 1 p ( w ) (4) Unfortunately, we do not know the exact distribution of p ( w ) and, even if we did, we would need to sum over the infinite set of possible strings * to compute this entropy, which is intractable.",
"If we have another probability distribution p ( w ) , though, we can calculate the cross-entropy between them as an approximation, i.e. H( W ) H ( W ) 1 NN (cid:88) n =1 log 1 p ( w ( n ) ) (5) where { w ( n ) } Nn =1 are samples from the true distribution p .",
"Throughout the paper, the tilde marks held-out data, i.e., data not used during model training.",
"We note that the approximation becomes exact as N by the weak law of large numbers.",
"This cross-entropy estimate gives us an upper bound on the actual entropy.",
"This bound is tighter the closer the distributions p ( w ) and p ( w ) are.",
"How should we train a model to estimate this universal phonotactic distribution p ( w ) , though?",
"We train a phone-level language model to predict the next phone given previous ones in a word, i.e. p ( w ) = | w | (cid:89) t =1 p ( w t | w <t ) (6) In this work, we use an LSTM as our language model (Hochreiter and Schmidhuber, 1997).",
"Each phone w t is represented using a lookup embedding z t R d .",
"These are fed into the LSTM, outputting temporal representations of the sequence: h t = LSTM( z t 1 , h t 1 ) (7) where h 0 is the zero vector.",
"These representations are linearly transformed and used in a softmax to approximate the probability distribution: p ( w t | w <t ) = softmax ( W h t + b ) (8) All parameters are learned via gradient descent, minimising the cross-entropy in the training set.",
"As mentioned before, salient regularities between form and meaning across languages might result from large groups of genealogically or spatially related languages.",
"In particular it is practical to consider two independent problems in this respect:",
"(i) Eq.",
"(5)'s inequality only holds if H ( W ) is estimated on a set of datapoints sampled independently from the set of points on which the model p was trained.",
"As such, the test set should only include languages that are not genealogically or areally related to those in the training set;",
"(ii) Within our dataset, the different size of areal and genealogical groups should be accounted for so that our results are not biased towards particularly large areas or language families.",
"Traintest split.",
"To mitigate the problem referred to in the first item, we cross-validate our models by appealing to the notion of macroareas, large-scale regions of the world that simultaneously maximise internal historical dependency while minimising external ones.",
"Striking a balance between historical independence and data availability, we consider the following four macroareas: the Americas, Eurasia, Africa, and the Pacific (which in this instantiation includes Papua New Guinea and Australiasee Fig. 1).",
"We will use these macroareas as our folds.",
"Two macroareas will be used at each time for training, while one other is used for validation and the last for testing.",
"Some language families, though, might be present in more than one macroarea (e.g. many European languages are spoken natively in the Americas and Africa).",
"These families will be assigned to the one macroarea which contains most of its family members, since we believe reducing genealogical impact should be preferred over areal impact for our data and purposes, in cases for which such a choice is required.",
"7 Family size bias.",
"The second problem is tackled by weighting each example's contribution to our loss function by the inverse of its family size l ( n ) : L ( ) = 1 LN (cid:88) n =1 1 l ( n ) log 1 p ( w ( n ) ) (9) where L = (cid:80) Nn =1 1 l ( n ) re-normalises the cross-entropy using the family sizes.",
"This weighted cross-entropy loss function makes per instance contributions of large language families smaller, reducing their impact on the trained model.",
"To mitigate the same bias effect on the evaluation of validation and test sets, we first get cross-entropies per word.",
"We subsequently average them per language, per family, and per macroarea.",
"This way, each family will have the same effect per macroarea and each macroarea will have the same effect on the overall cross-entropy.",
"We want to compare per-concept phonotactic models with general ones to analyse soundmeaning associations.",
"With that in mind, we condition phone-level language models on meaning: p ( w | v ) = | w | (cid:89) t =1 p ( w t | w <t , v ) (10) These models are trained following the same procedures explained above, but conditioning the LSTMs on concept specific representations.",
"Specifically, the one-hot representation is linearly transformed and fed into the LSTM as its initial state h 0 = W 0 v (11) where the linear transformation W 0 R d K is randomly initialised and learned with the rest of the model.",
"We then use this distribution to estimate the conditional entropy, analogously to eq.",
"(5), as in H( W | V ) (cid:46) 1 NN (cid:88) n =1 log 1 p ( w ( n ) | v ( n ) ) (12) 7 As mentioned in 3, the list of concepts in ASJP was chosen to minimise borrowings across languages.",
"We further note here that loan words are annotated in this dataset and we drop those words for the purpose of our analysis.",
"where { w ( n ) , v ( n ) } Nn =1 are held-out from meaning pairs, sampled from the true distribution.",
"The mutual information between wordforms and meaning can be decomposed into the difference of two entropy measures.",
"Unfortunately, we have no way of directly measuring these entropy values without their probability distributions ( p ( w ) and p ( w | v ) ).",
"We use the estimated cross-entropies as an approximation to this mutual information: I( W ; V ) = H( W ) H( W | V ) (13) H ( W ) H ( W | V ) (14) We note that eq.",
"(14) is approximate because it is the difference of two upper bounds.",
"Furthermore, while there are many ways to estimate mutual information, computing it as the difference between two cross-entropies seems to produce consistent results (McAllester and Stratos, 2020).",
"As mentioned in 4.3, our entropy upper bounds will be tighter if our models p better capture p .",
"With this in mind, we optimise the hyper-parameters of our models using Bayesian optimisation with a Gaussian process prior (Snoek et al., 2012)hyper-parameter ranges are presented in App.",
"A. We train 25 models for each configuration and choose the best one according to the validation set, optimising our weighted cross-entropy loss using AdamW (Loshchilov and Hutter, 2019).",
"We are interested in estimating the cross-linguistic mutual information between meaning and wordforms.",
"With this in mind, we follow the steps described in 4.4, but instead of only 1 model, we train 25 models using different seeds for each fold (totalling 100 models).",
"Average resultsoverall and per macroareaare shown in Tab.",
"1. 8 This meaning conditioned model may potentially be better than the raw LSTMs (without conditioning on meaning; due to the extra parameters).",
"To control for this fact, we ran an extra experiment where we estimated H( W ) using the meaning dependent model with shuffled concept IDs (so there is no formmeaning association).",
"The results from this shuffled IDs model were very similar to the raw LSTM ones.",
"9 Our code is available at https://github.com/ rycolab/form-meaning-associations .",
"Across macroareas, results indicate a small average contribution of meaning into form (in all cases smaller than 1% ).",
"10 A simple permutation test (explained later in this section) indicates that, under standard levels of significance ( = 0 . 01 ) and after controlling for multiple comparisons, 11 this average quantity is significant in 2 out of 4 of the macroareas.",
"Nevertheless, this should not be overinterpreted, as unaccounted factors might be responsible for these effects; for instance, the impact of shared history across families in regions smaller than macroareas (almost all human languages have been in contact, directly or indirectly).",
"Hence it is reasonable to conclude that there is no definitive evidence for an overall average association at this level of description of the data.",
"We consider specific concept formmeaning associations next.",
"12 10 For comparison, Pimentel et al. (2019) estimate intra-language systematicity only accounts for roughly 3 5% of the entropy in wordforms in English, German and Dutch (given a characteristic sample of the vocabulary).",
"11 All our experiments rely on Benjamini and Hochberg (1995) corrections 12 We ran an experiment changing the macroarea combinations in the train-validation-test sets and the results were stable, leading only to minor numerical changes to Tab.",
"Paired Permutation Tests.",
"For the permutation test, we first get the average MI over the 25 random seed results for a macroarea.",
"We then permute the signs on these 25 results to create 10 5 new average MIs.",
"By comparing the original result with these permutation ones we get the probability that our MI estimate is significantly larger than zero.",
"A relevant detail is that these tests are performed on estimatesas opposed to real MI.",
"The mutual information is always non-negative, but our estimate is not.",
"If the MI is zero, we expect our estimates to be negative half the time, since both upper bounds should be roughly equivalent H ( W ) H ( W | V ) .",
"A note on the LSTMs' quality.",
"Our results strongly rely on the quality of approximations.",
"Our language independent H( W ) estimate is 3 .",
"85 bits per phone.",
"Meanwhile, the per-language phonotactic cross-entropy found by Pimentel et al. (2020) is, on average, roughly 3 bits per phonegenerally speaking, these results seem consistent.",
"13 Furthermore, our model's cross-entropy on the training set is 3.73while it may have overfit slightly, this is not an aberration.",
"In this section we focus on concept-specific form meaning associations.",
"With this in mind we group all words for a specific concept c C into a set: S c = (cid:110) ( w ( n ) , v ( n ) ) | v ( n ) = c C (cid:111) (15) For each such set, we run a permutation test on their approximated pointwise mutual information 13 These results are not directly comparable, though, since words are encoded with different phonetic alphabets in ASJP and NorthEuraLex (Dellert et al., 2020).",
"values PMI( w ( n ) ; v ( n ) ) , assessing if a concept has a statistically significant soundmeaning association.",
"14 Of the 100 concepts in our dataset, 26 of them have positive mutual information ( p < 0 . 01 ).",
"This means that, at least in the set of concepts represented in our dataset, non-arbitrary formmeaning associations are not exceptions.",
"We present the average uncertainty coefficient per concept compared to average wordform length in Fig.",
"2. We do not find any correlation between these measurements.",
"Analysing these results more closely, we see the pronouns I and you present the highest coefficient values.",
"Most colours in our dataset ( white , red , green , yellow ) show statistically positive MI.",
"Furthermore, some concepts related to body parts ( tongue , skin , knee , heart , claw ) and several concepts related to the environment ( water , sand , star , cloud , dry , cold ) have statistically positive results.",
"Wichmann et al. (2010) also looked at how concepts differ in their degree of formmeaning associations, presenting them in an ordered list together with a measure of how much they deviate from a global average phone usage.",
"They only look at isolated phone's frequencies, though, and do not control for word lengthour mutual information metric controls for both factors.",
"When we compare our results to Wichmann et",
"al.'s (2010) top 10 list of concepts, we see both contain several body parts ( tongue , skin , knee ) and pronouns ( I , you ).",
"14 This permutation test is similar to the one in 5.1, but uses the family size corrections discussed in 4.5 when averaging resultsi.e., for each permutation (and the original one) we average words, languages, families, and macroareas, in this sequence, to get the MI estimate.",
"In their position paper, Perniss et al. (2010) argue that non-arbitrariness is a general property of language, although sometimes believed to be an exception.",
"They further state that: if we look at the lexicon of English (or that of other Indo-European languages), we might be forgiven for thinking that there could be anything but a conventionally determined, arbitrary connection between a given word and its referent. For the vast majority of English words there is an arbitrary relationship between form and meaning.",
"In fact, in our results we do not find positive MI values, on average, for English.",
"In this section, we analyse results per language, trying to find signs of cross-linguistic non-arbitrary associations in them.",
"Analogously to what we did with concepts, we run permutations tests using the PMIs for the set of words in each language (i.e. sets S l analogous to S c in eq.",
"(15)).",
"Fig. 3 presents the per-language uncertainty coefficient values in a world map.",
"There are 5189 languages in ASJP, out of those we find that only 85 have significantly positive mutual information ( p < 0 . 01 ).",
"Each language, though, has at most 100 values (the number of concepts), making this a hard statistical test after correcting for the multiple tests.",
"If we relax our hypothesis testing thresholds to p < 0 .",
"05 (an admittedly much weaker test), then 242 languages present statistically positive MIthis suggests that, although maybe not common, formmeaning patterns are not a rare exception restricted to a small number of languages.",
"We now turn to the relationship between concepts and the phones which appear in them, trying to assess specific conceptphone pairs which present positive MI.",
"Such a positive value would indicate that concept informs on the presence of that specific phone, suggesting a non-arbitrary association between them.",
"Similarly to before, we create sets of concepttoken pairs: S c ,s = (cid:110) ( w ( n ) t , v ( n ) ) | (16) v ( n ) = c C, w ( n ) t = s (cid:111) where ( c , s ) is the analysed concepttoken pair and w ( n ) t is the t th token of word w ( n ) .",
"During this analysis, though, we focus on conceptphone pairs which had statistically significant PMIs in all four macroareas, following the controls introduced in Blasi et al. (2016), as a way of maximising the chances of finding true history-independent associations (under the risk of increasing the rate of false negatives).",
"With that in mind, we split sets S c,t per macroarea and got the PMI values for each of them, similarly to 5.2.",
"We threw away pairs which did not occur at least 1000 times together and ran a permutation test with 10 5 permutations for each concepttokenmacroarea tuple.",
"We note a concepttoken association does not make a pair probable; the token is simply more likely to appear with the concept than would be without it.",
"Tab.",
"2 presents pairs which were significant in all macroareas ( p < 0 . 01 after corrections).",
"After analysis, we find a few interesting results.",
"As mentioned in 1, we see an association between [l] and the concept TONGUE and between [p] and FULL , similarly to Blasi et al. (2016).",
"We also see an association between pronounse.g. I, WE , YOU and the end-of-string [ # ].",
"15 This was expected; pronouns are very frequent words in most languages, and such words are usually shorter (Zipf, 1949).",
"As previously found by Blasi et al. (2016), the concept BREAST has a significant association with both [m] and [u].",
"As they point out, these might be due to the mouth configuration of suckling babies or the sounds they produce when feeding (Jakob-son, 1960; Traunmuller, 1994).",
"We further find several other pairs which are supported by their findings: HORN [k,r]; KNEE [o,u,k]; LEAF [l,p]; WE [n].",
"Furthermore, a nice sanity check is that none of the negative conceptpair associations they found are present in our results.",
"As a final experiment, we analyse the importance of splitting traintest sets according to macroareas (as discussed in 4.5) in order to minimise areal effectsversus simply splitting languages based on their families.",
"Even though the list of concepts in ASJP was designed to be resistant to borrowings (and we further remove loan words from our analysis), language contacts beyond loan words could still impact results.",
"One such example is the (potential) impact of Basque in Spanish phonology, which lost word initial / f / in many words, e.g. hablar , during the late Middle Ages (see pg. 91 of Penny, 2002, for a longer discussion).",
"We create 4 folds, splitting them based on glot-tocode language families, and use 4-fold cross-validation to get family-split resultsin opposition to the macroarea-split results.",
"Using family-splits we get an I( W ; V ) = 0 .",
"020 bits, with an uncertainty coefficient of 0.53% (averaged over the 4-15 The association of a concept with the [ # ] symbol means the model can more easily predict the end-of-word when conditioned on this concept. This means the length of that concept is not distributed as the average, being more predictable. folds)this is almost twice the overall MI found on the macroarea-splits.",
"A Welch's t -test between both runs shows family-splits have a larger MI than the macroarea results ( p < 0 . 01 ), suggesting it is important to control for areal effects when evaluating soundmeaning associations.",
"In this paper we have provided a holistic assessment of formmeaning associations involving words found in the basic vocabulary in a large number of languages.",
"In agreement with previous findings, we find that on average the meaning does not contribute substantially to the form of the words, but instead the most consistent associations were restricted to a specific subset of all of the words analysed.",
"We find a list of 26 concepts (out of the 100 analysed) with statistically significant formmeaning associationssuggesting that cross-linguistic non-arbitrariness is not a rare exception.",
"Finally, we also find a set of conceptphone pairs with a consistently positive relationship across the four analysed macroareas.",
"This paper concerns itself with investigating cross-linguistic formmeaning associations.",
"We see no direct ethical concerns relating to this work, as it only involves computational experiments on previously collected data.",
"Sren Wichmann's research was partly funded by a subsidy from the Russian government to support the Programme of Competitive Development of Kazan Federal University, Russia.",
"Damian E. Blasi acknowledges funding from the Branco Weiss Fellowship, administered by the ETH Zurich.",
"Damian E. Blasi's research was also executed within the framework of the HSE University Basic Research Program and funded by the Russian Academic Excellence Project 5-100'.",
"As mentioned in 4.8, we used Bayesian optimisation to tune the model's hyper-parameters.",
"We consider a log-uniform prior over the embedding size (from 4 to 1024 ), and over the size of the hidden state ( 32 to 1024 ).",
"We also considered a uniform prior over the number of layers ( 1 to 4 ) and dropout ( 0 to 0 . 5 )."
] | [
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"method",
"method"
] |
[
"The development of a fictional plot is centered around characters who closely interact with each other forming dynamic social networks.",
"In literature analysis, such networks have mostly been analyzed without particular relation types or focusing on roles which the characters take with respect to each other.",
"We argue that an important aspect for the analysis of stories and their development is the emotion between characters.",
"In this paper, we combine these aspects into a unified framework to classify emotional relationships of fictional characters.",
"We formalize it as a new task and describe the annotation of a corpus, based on fan-fiction short stories.",
"The extraction pipeline which we propose consists of character identification (which we treat as given by an oracle here) and the relation classification.",
"For the latter, we provide results using several approaches previously proposed for relation identification with neural methods.",
"The best result of 0.45 F 1 is achieved with a GRU with character position indicators on the task of predicting undirected emotion relations in the associated social network graph.",
"Every fictional story is centered around characters in conflict (Ingermanson and Economy, 2009) which interact, grow closer or apart, as each of them has ambitions and concrete goals (Acker-man and Puglisi, 2012, p. 9).",
"Previous work on computational literary studies includes two tasks, namely social network analysis and sen-timent/emotion analysis, both contributing to a computational understanding of narrative structures.",
"We argue that joining these two tasks leverages simplifications that each approach makes when considered independently.",
"We are not aware of any such attempt and therefore propose the task of emotional character network extraction from fictional texts, in which, given a text, a network is to be generated, whose nodes correspond to characters and edges to emotions between characters.",
"One of the characters is part of a trigger/cause for the emotion experienced by the other.",
"Figure 1 depicts two examples for emotional character interactions at the text level.",
"Such relation extraction is the basis for generating social networks of emotional interactions.",
"Dynamic social networks of characters are analyzed in previous work with different goals, e.g. , to test the differences in interactions between various adaptations of a book (Agarwal et al., 2013); to understand the correlation between dialogue and setting (Elson et al., 2010); to test whether social networks derived from Shakespeare's plays can be explained by a general sociological model (Nal-isnick and Baird, 2013); in the task of narrative generation (Sack, 2013); to better understand the nature of character interactions (Piper et al., 2017).",
"Further, previous work analyses personality traits of characters (mostly) independently of each other (Massey et al., 2015; Barth et al., 2018; Bamman et al., 2014).",
"Emotion analysis in literature has focused on the development of emotions over time, abstracting away who experiences an emotion (Reagan et al., 2016; Elsner, 2015; Kim et al., 2017; Piper and Jean So, 2015, i.a. ).",
"Fewer works have ad-Hermione looked at Draco curiously...",
"dressed the annotation of emotion causes, e.g. , Neviarouskaya and Aono (2013), Ghazi et al. (2015), Saur and Pustejovsky (2009), and Kim and Klinger (2018).",
"To the best of our knowledge, there is no previous research that deals with emotional relationships of literary characters.",
"The works that are conceptually the closest to our paper are Chaturvedi et al. (2017) and Massey et al. (2015), who use a more general set of relationship categories.",
"Most approaches to emotion classification from text build on the classes proposed by Plutchik (2001) and Ekman (1992).",
"Here, we use a discrete emotion categorization scheme based on fundamental emotions as proposed by Plutchik.",
"This model has previously been used in computational analysis of literature (Mohammad, 2012, i.a. ).",
"We refer the reader to social psychology literature for more details on the emotional relationship between people (Burkitt, 1997; Gaelick et al., 1985).",
"The main contributions of this paper are (1) to propose the new task of emotional relationship classification of fictional characters, (2) to provide a fan-fiction short story corpus annotated with characters and their emotional relationships, and (3) to provide results for relation extraction models for the task.",
"We evaluate our models on the textual and the social network graph level and show that a neural model with positional indicators for character roles performs the best.",
"An additional analysis shows that the task of character relationship detection leads to higher performance scores for polarity detection than for more fine-grained emotion classes.",
"Differences between models are minimal when the task is cast as a polarity classification but are striking for emotion classification.",
"This work has potential to support a literary scholar in analyzing differences and commonalities across texts.",
"As an example, one may consider Goethe's The Sorrows of Young Werther (Goethe, 1774), a book that gave rise to a plethora of imitations by other writers, who attempted to depict a similar love triangle between main characters found in the original book.",
"The results of our study can potentially be used to compare the derivative works with the original (see also Barth et al., 2018).",
"( C exp , e, C cause ) , in which the character C exp feels the emotion e (mentioned in text explicitly or implicitly).",
"The character C cause is part of an event which triggers the emotion e .",
"We consider the eight fundamental emotions defined by Plutchik (2001) (anger, fear, joy, anticipation, trust, surprise, disgust, sadness).",
"Each character corresponds to a token sequence for the relation extraction task and to a normalized entity in the graph depiction.",
"Using WebAnno (Yimam et al., 2013), we annotate a sample of 19 complete English fan-fiction short stories, retrieved from the Archive of Our Own project 1 (due to availability, the legal possibility to process the texts and a modern language), and a single short story by Joyce (1914) (Counter-parts) being an exception from this genre in our corpus.",
"All fan-fiction stories were marked by the respective author as complete, are shorter than 1500 words, and depict at least four different characters.",
"They are tagged with the keywords emo-tion and relationships.",
"The annotators were instructed to mark every character mention with a canonical name and to decide if there is an emotional relationship between the character and another character.",
"If so, they marked the corresponding emotion phrase with the emotion labels (as well as indicating if the emotion is amplified, downtoned or negated).",
"Based on this phrase annotation, they marked two relations: from the emotion phrase to the experiencing character and from the emotion phrase to the causing character (if available, i.e. , C cause can be empty).",
"One character may be described as experiencing multiple emotions.",
"We generate a consensus annotation by keeping all emotion labels by all annotators.",
"This is motivated by the finding by Schuff et al. (2017) that such high-recall aggregation is better modelled in an emotion prediction task.",
"The data is available at http://www.ims.uni-stuttgart.de/data/ relationalemotions.",
"Inter-Annotator Agreement We calculate the agreement along two dimensions, namely unlabelled vs. labeled and instance vs. graph-level.",
"Table 1 reports the pairwise results for three annotators.",
"In the Inst.",
"labelled setting, we accept an instance being labeled as true positive if both annotators marked the same characters as experiencer and cause of an emotion and classified their in-1 https://archiveofourown.org a1a2 a1a3 a2a3 Inst.",
"teraction with the same emotion.",
"In the Inst.",
"unlabelled case, the emotion label is allowed to be different.",
"On the graph level ( Graph labelled and Graph unlabelled ), the evaluation is performed on an aggregated graph of interacting characters, i.e. , a relation is accepted by one annotator if the other annotator marked the same interaction somewhere in the text.",
"We use the F 1 score to be able to measure the agreement between two annotators on the span levels.",
"For that, we treat the annotations from one annotator in the pair as correct and the annotations from the other as predicted.",
"As Table 1 shows, agreement on the textual level is the lowest with values between 19 and 33 % (depending on the annotator pair), which also motivated our aggregation strategy mentioned before.",
"The values for graph-labelled agreement are more relevant for our use-case of network generation.",
"The values are higher (6693 %), showing that annotators agree when it comes to detecting relationships regardless of where exactly in the text they appear.",
"Statistics.",
"Table 2 summarizes the aggregated results of the annotation.",
"The column All lists the number of experiencer annotations (with an emotion), the column Rel. refers to the counts of emotion annotations with both experiencer and cause.",
"Joy has the highest number of annotated instances and the highest number of relationship instances (413 and 308 respectively).",
"In contrast, sadness has the lowest number of annotations with a total count of instances and relations being 97 and 64 respectively.",
"Overall, we obtain 1335 annotated instances, which we use to build and test our models.",
"Figure 2 depicts the process flow for each of the models.",
"We distinguish between directed and Emotion All Rel.",
"undirected relation prediction.",
"In the directed scenario, we classify which character is the experiencer and which character is the cause, as well as what is the emotion between two characters.",
"For the undirected scenario, we only classify the emotion relation between two characters.",
"We do not tackle character name recognition here: our models build on top of gold character annotations.",
"The baseline model predicts the emotion for a character pair based on the NRC dictionary (Mo-hammad and Turney, 2013).",
"It accepts the emotion associated with the words occurring in a window of n tokens around the two characters, with n being a parameter set based on results on a development set for each model (see supplementary material for more details).",
"Further we cast the relation detection as a machine learning-based classification task, in which each classification instance consists of two character mentions with up to n tokens context to the Story NER+Coref.",
"left and to the right of the character mentions.",
"We compare an extremely randomized tree classifier with bag-of-words features (Geurts et al., 2006) ( BOW-RF ) with a two-layer GRU neural network (Chung et al., 2014) with max and averaged pooling.",
"In the latter, we use different variations of encoding the character positions with indicators (in-spired by Zhou et al. (2016), who propose the use of positional indicators for relation detection).",
"Our variations are exemplified in Table 3.",
"Note that the case of predicting directed relations is simpli-fied in the Role and MRole cases in contrast to Entity and MEntity, as the model has access to gold information about the relation direction.",
"We obtain word vectors for the embedding layer from GloVe (pre-trained on Common Crawl, d = 300 , Pennington et al., 2014) and initialize out-of-vocabulary terms with zeros (including the position indicators).",
"Experimental Setting.",
"In the classification experiments, we compare the performance of our models on different label sets.",
"Namely, we compare the complete emotion set with 8 classes to a 5 class scenario where we join anger and disgust , trust and joy , as well as anticipation and surprise (based on preliminary experiments and inspection of confusion matrices).",
"The 2-class scenario consists of positive ( anticipation, joy, trust, surprise ) and negative relations ( anger, fear, sadness, disgust ).",
"For each set of classes, we consider a setting where directed relations are predicted with one where the direction is ignored.",
"Therefore, in the directed prediction scenario, each emotion constitutes two classes to be predicted for both possible directions (therefore, 16, 10, and 4 labels exist).",
"The evaluation is performed with precision, recall and F 1 in a cross-story validation setting, in which each story is used as one separate test/validation source.",
"For model selection and meta-parameter optimization, we use 50 % randomly sampled annotations from this respective test/validation instance as a validation set and the remainder as test data.",
"Further, we evaluate on three different levels of granularity: Given two character mentions, in the instance-level evaluation, we only accept the prediction to be correct if exactly the same mention has the according emotion annotation.",
"We then aggregate the different true positive, false positive and false negative values across all stories before averaging to an aggregated score (similar to micro-averaging).",
"On the story-level, we also accept a prediction to be a true positive the same way, but first calculate the result P/R/F 1 for the whole story before averaging (similar to macro-averaging).",
"On the graph-level, we accept a prediction for a character pair to be correct without considering the exact position.",
"Results.",
"Table 4 shows the results (precision and recall shown in supplementary material) on development data and independent test data for the best models.",
"The GRU+MRole model achieves the highest performance with improvement over BOW-RF on the instance and story levels, and shows a clear improvement over the GRU+NoInd.",
"model in the directed 8-class setting.",
"GRU+Role achieves the highest performance on the graph level in the directed 8-class setting.",
"In the undirected prediction setting, all models perform better in the 5-class experiment and 2-class experiment than in 8-class experiment.",
"This is not always the case for the directed prediction, where some models perform better in 8-class experiment (GRU+NoInd., GRU+Entity, BOW-RF).",
"We observe that the difference in F 1 score between the baseline, bag-of-words model and our GRU models in a 2-class experiment is marginal.",
"This may be an indicator that the binary representation harms the classification of emotional relations between characters, as they can be nuanced and do not always perfectly map to either positive and negative classes.",
"On the other side, a more sophisticated classification approach is necessary to capture these nuanced differences.",
"As expected, we observe a better performance on a graph level for all models, with the highest performance of 47 % F 1 (GRU+MEntity), 63 % F 1 (GRU+MEntity), and 73 % F 1 (GRU+MRole, GRU+MEntity, GRU+NoInd.) in undirected 8-, 5, and 2-class experiments, respectively, on the development set.",
"In the directed scenario, the highest performances are 41 % F 1 (GRU+Role), 48 % F 1 (GRU+MRole), and 65 % F 1 (GRU+MRole).",
"The results show that the sequential and embedding information captured by a GRU as well as additional positional information are all relevant for a substantial performance, at least on the fine-grained emotion prediction task.",
"In this paper, we formulated the new task of emotional character network extraction from fictional texts.",
"We argued that joining social network analysis of fiction with emotion analysis leverages simplifications that each approach makes when considered independently.",
"We presented a publicly available corpus of fan-fiction short stories annotated with character relations and proposed several relation classification models.",
"We showed that a recurrent neural architecture with positional indicators leads to the best results of relation classification.",
"We also showed that differences between different machine learning models with binary mapping of emotion relation is almost leveled.",
"This may suggest that emotion relation classification is best modeled in a multi-class setting, as emotional interactions of fictional characters are nuanced and do not simply map to either a positive or a negative class.",
"For future work we propose to develop a real-world application pipeline in which character pairs are not given by an oracle, but rather extracted from text automatically using named entity recognition.",
"To better understand the relation between instance and graph levels, we propose to explore the best strategy for edge labeling either by a majority vote or accepting the edges with the highest confidence scores.",
"Further, modeling the task in an end-to-end learning setting from text to directly predict the graph, in the spirit of multi-instance learning, is one of the next steps.",
"To that end, we suggest obtaining more gold data with character relations and optimize the pipeline towards the best performance on additional data.",
"This research has been conducted within the CRETA project (http://www.creta.uni-stuttgart. de/) which is funded by the German Ministry for Education and Research (BMBF) and partially funded by the German Research Council (DFG), projects SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1).",
"We thank Laura-Ana-Maria Bostan and Heike Adel for fruitful discussions."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"result",
"abstain",
"objective",
"objective",
"abstain",
"result",
"other",
"other"
] |
[
"Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results ) is critical for machine reading of individual publications and automated analysis of the scientific literature.",
"We propose structural scaffolds, a multitask model to incorporate structural information of scientific papers into citations for effective classification of citation intents.",
"Our model achieves a new state-of-the-art on an existing ACL anthology dataset (ACL-ARC) with a 13.3% absolute increase in F1 score, without relying on external linguistic resources or hand-engineered features as done in existing methods.",
"In addition, we introduce a new dataset of citation intents (Sci-Cite) which is more than five times larger and covers multiple scientific domains compared with existing datasets.",
"Our code and data are available at: https://github.com/ allenai/scicite .",
"Citations play a unique role in scientific discourse and are crucial for understanding and analyzing scientific work (Luukkonen, 1992; Leydesdorff, 1998).",
"They are also typically used as the main measure for assessing impact of scientific publications, venues, and researchers (Li and Ho, 2008).",
"The nature of citations can be different.",
"Some citations indicate direct use of a method while some others merely serve as acknowledging a prior work.",
"Therefore, identifying the intent of citations (Figure 1) is critical in improving automated analysis of academic literature and scientific impact measurement (Leydesdorff, 1998; Small, 2018).",
"Other applications of citation intent classification are enhanced research experience (Moravcsik and Murugesan, 1975), information retrieval (Ritchie, 2009), summarization (Co-.",
"han and Goharian, 2015), and studying evolution of scientific fields (Jurgens et al., 2018).",
"In this work, we approach the problem of citation intent classification by modeling the language expressed in the citation context.",
"A citation context includes text spans in a citing paper describing a referenced work and has been shown to be the primary signal in intent classification (Teufel et al., 2006; Abu-Jbara et al., 2013; Jurgens et al., 2018).",
"Existing models for this problem are feature-based, modeling the citation context with respect to a set of predefined hand-engineered features (such as linguistic patterns or cue phrases) and ignoring other signals that could improve prediction.",
"In this paper we argue that better representations can be obtained directly from data, sidestepping problems associated with external features.",
"To this end, we propose a neural multitask learning framework to incorporate knowledge into citations from the structure of scientific papers.",
"In particular, we propose two auxiliary tasks as structural scaffolds to improve citation intent prediction: 1 (1) predicting the section title in which the citation occurs and (2) predicting whether a sentence needs a citation.",
"Unlike the primary task of citation intent prediction, it is easy to collect large 1 We borrow the scaffold terminology from Swayamdipta et al. (2018) in the context of multitask learning.",
"amounts of training data for scaffold tasks since the labels naturally occur in the process of writing a paper and thus, there is no need for manual annotation.",
"On two datasets, we show that the proposed neural scaffold model outperforms existing methods by large margins.",
"Our contributions are:",
"(i) we propose a neural scaffold framework for citation intent classification to incorporate into citations knowledge from structure of scientific papers;",
"(ii) we achieve a new state-of-the-art of 67.9% F1 on the ACL-ARC citations benchmark, an absolute 13.3% increase over the previous state-of-the-art (Jurgens et al., 2018); and",
"(iii) we introduce SciCite, a new dataset of citation intents which is at least five times as large as existing datasets and covers a variety of scientific domains.",
"We propose a neural multitask learning framework for classification of citation intents.",
"In particular, we introduce and use two structural scaffolds, auxiliary tasks related to the structure of scientific papers.",
"The auxiliary tasks may not be of interest by themselves but are used to inform the main task.",
"Our model uses a large auxiliary dataset to incorporate this structural information available in scientific documents into the citation intents.",
"The overview of our model is illustrated in Figure",
"2. Let C denote the citation and x denote the citation context relevant to C .",
"We encode the tokens in the citation context of size n as x = { x 1 , ..., x n } , where x i R d 1 is a word vector of size d 1 which concatenates non-contextualized word representations (GloVe, Pennington et al., 2014) and contextualized embeddings (ELMo, Peters et al., 2018), i.e.: x i = (cid:2) x GloVe i ; x ELMo i (cid:3) We then use a bidirectional long short-term mem-ory (Hochreiter and Schmidhuber, 1997) (BiL-STM) network with hidden size of d 2 to obtain a contextual representation of each token vector with respect to the entire sequence: 2 h i = (cid:2) LSTM( x , i ); LSTM( x , i ) (cid:3) , where h R ( n, 2 d 2 ) and LSTM( x , i ) processes x from left to write and returns the LSTM hidden state at position i (and vice versa for the backward direction LSTM ).",
"We then use an attention mechanism to get a single vector representing the whole input sequence: z = n (cid:88) i =1 i h i , i = softmax( w (cid:62) h i ) , where w is a parameter served as the query vector for dot-product attention.",
"3 So far we have obtained the citation representation as a vector z .",
"Next, we describe our two proposed structural scaffolds for citation intent prediction.",
"In scientific writing there is a connection between the structure of scientific papers and the intent of citations.",
"To leverage this connection for more effective classification of citation intents, we propose a multitask framework with two structural scaffolds (auxiliary tasks) related to the structure of scientific documents.",
"A key point for our proposed scaffolds is that they do not need any additional manual annotation as labels for these tasks occur naturally in scientific writing.",
"The structural scaffolds in our model are the following: 2 In our experiments BiGRUs resulted in similar performance.",
"3 We also experimented BiLSTMs without attention; we found that BiLSTMs/BiGRUs along with attention provided best results.",
"Other types of attention such as additive attention result in similar performance.",
"Citation worthiness.",
"The first scaffold task that we consider is citation worthiness of a sentence, indicating whether a sentence needs a citation.",
"The language expressed in citation sentences is likely distinctive from regular sentences in scientific writing, and such information could also be useful for better language modeling of the citation contexts.",
"To this end, using citation markers such as [12] or Lee et al (2010), we identify sentences in a paper that include citations and the negative samples are sentences without citation markers.",
"The goal of the model for this task is to predict whether a particular sentence needs a citation.",
"4 Section title.",
"The second scaffold task relates to predicting the section title in which a citation appears.",
"Scientific documents follow a standard structure where the authors typically first introduce the problem, describe methodology, share results, discuss findings and conclude the paper.",
"The intent of a citation could be relevant to the section of the paper in which the citation appears.",
"For example, method-related citations are more likely to appear in the methods section.",
"Therefore, we use the section title prediction as a scaffold for predicting citation intents.",
"Note that this scaffold task is different than simply adding section title as an additional feature in the input.",
"We are using the section titles from a larger set of data than training data for the main task as a proxy to learn linguistic patterns that are helpful for citation intents.",
"In particular, we leverage a large number of scientific papers for which the section information is known for each citation to automatically generate large amounts of training data for this scaffold task.",
"5 Multitask formulation.",
"Multitask learning as defined by Caruana (1997) is an approach to inductive transfer learning that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias.",
"It requires the model to have at least some sharable parameters between the tasks.",
"In a general setting in our model, we have a main task T ask (1) and n 1 auxiliary tasks T ask ( i ) .",
"As shown in Figure 2, each scaffold task will have its task-specific parameters for effective classifica-4 We note that this task may also be useful for helping authors improve their paper drafts.",
"However, this is not the focus of this work.",
"5 We also experimented with adding section titles as additional feature to the input, however, it did not result in any improvements.",
"tion and the parameters for the lower layers of the network are shared across tasks.",
"We use a Multi Layer Perceptron (MLP) for each task and then a softmax layer to obtain prediction probabilites.",
"In particular, given the vector z we pass it to n MLPs and obtain n output vectors y ( i ) : y ( i ) = softmax(MLP ( i ) ( z )) We are only interested in the output y (1) and the rest of outputs ( y (2) , ..., y ( n ) ) are regarding the scaffold tasks and only used in training to inform the model of knowledge in the structure of the scientific documents.",
"For each task, we output the class with the highest probability in y .",
"An alternative inference method is to sample from the output distribution.",
"Let D 1 be the labeled dataset for the main task T ask (1) , and D i denote the labeled datasets corresponding to the scaffold task T ask ( i ) where i { 2 , ..., n } .",
"Similarly, let L 1 and L i be the main loss and the loss of the auxiliary task i , respectively.",
"The final loss of the model is: L = (cid:88) ( x , y ) D 1 L 1 ( x , y ) + n (cid:88) i =2 i (cid:88) ( x , y ) D i L i ( x , y ) , (1) where i is a hyper-parameter specifying the sensitivity of the parameters of the model to each specific task.",
"Here we have two scaffold tasks and hence n =3 .",
"i could be tuned based on performance on validation set (see 4 for details).",
"We train this model jointly across tasks and in an end-to-end fashion.",
"In each training epoch, we construct mini-batches with the same number of instances from each of the n tasks.",
"We compute the total loss for each mini-batch as described in Equation 1, where L i =0 for all instances of other tasks j (cid:54) = i .",
"We compute the gradient of the loss for each mini-batch and tune model parameters using the AdaDelta optimizer (Zeiler, 2012) with gradient clipping threshold of 5.0.",
"We stop training the model when the development macro F1 score does not improve for five consecutive epochs.",
"We compare our results on two datasets from different scientific domains.",
"While there has been a long history of studying citation intents, there are only a few existing publicly available datasets on Intent cateogry Definition Example Background information The citation states, mentions, or points to the background information giving more context about a problem, concept, approach, topic, or importance of the problem in the field.",
"the task of citation intent classification.",
"We use the most recent and comprehensive (ACL-ARC citations dataset) by Jurgens et al. (2018) as a benchmark dataset to compare the performance of our model to previous work.",
"In addition, to address the limited scope and size of this dataset, we introduce SciCite, a new dataset of citation intents that addresses multiple scientific domains and is more than five times larger than ACL-ARC.",
"Below is a description of both datasets.",
"ACL-ARC is a dataset of citation intents released by Jurgens et al. (2018).",
"The dataset is based on a sample of papers from the ACL Anthology Reference Corpus (Bird et al., 2008) and includes 1,941 citation instances from 186 papers and is annotated by domain experts in the NLP field.",
"The data was split into three standard stratified sets of train, validation, and test with 85% of data used for training and remaining 15% divided equally for validation and test.",
"Each citation unit includes information about the immediate citation context, surrounding context, as well as information about the citing and cited paper.",
"The data includes six intent categories outlined in Table",
"2. 3.2 SciCite dataset Most existing datasets contain citation categories that are too fine-grained.",
"Some of these intent categories are very rare or not useful in meta analysis of scientific publications.",
"Since some of these fine-grained categories only cover a minimal percentage of all citations, it is difficult to use them to gain insights or draw conclusions on impacts of papers.",
"Furthermore, these datasets are usually domain-specific and are relatively small (less than 2,000 annotated citations).",
"To address these limitations, we introduce SciCite, a new dataset of citation intents that is significantly larger, more coarse-grained and general-domain compared with existing datasets.",
"Through examination of citation intents, we found out many of the categories defined in previous work such as motivation, extension or future work, can be considered as background information providing more context for the current research topic.",
"More interesting intent categories are a direct use of a method or comparison of results.",
"Therefore, our dataset provides a concise annotation scheme that is useful for navigating research topics and machine reading of scientific papers.",
"We consider three intent categories outlined in Table 1: BACKGROUND , METHOD and RESULTCOMPARISON .",
"Below we describe data collection and annotation details.",
"Citation intent of sentence extractions was labeled through the crowdsourcing platform Figure Eight.",
"6 We selected a sample of papers from the Semantic Scholar corpus, 7 consisting of papers in general computer science and medicine domains.",
"Citation contexts were extracted using science-6 https://www.figure-eight.com/ platform/ 7 https://semanticscholar.org/ parse.",
"8 The annotators were asked to identify the intent of a citation, and were directed to select among three citation intent options: METHOD , RESULTCOMPARISON and BACKGROUND .",
"The annotation interface also included a dummy option OTHER which helps improve the quality of annotations of other categories.",
"We later removed instances annotated with the OTHER option from our dataset (less than 1% of the annotated data), many of which were due to citation contexts which are incomplete or too short for the annotator to infer the citation intent.",
"We used 50 test questions annotated by a domain expert to ensure crowdsource workers were following directions and disqualify annotators with accuracy less than 75%.",
"Furthermore, crowdsource workers were required to remain on the annotation page (five annotations) for at least ten seconds before proceeding to the next page.",
"Annotations were dynamically collected.",
"The annotations were aggregated along with a confidence score describing the level of agreement between multiple crowdsource workers.",
"The confidence score is the agreement on a single instance weighted by a trust score (accuracy of the annotator on the initial 50 test questions).",
"To only collect high quality annotations, instances with confidence score of 0.7 were discarded.",
"In addition, a subset of the dataset with 100 samples was re-annotated by a trained, expert annotator to check for quality, and the agreement rate with crowdsource workers was 86% .",
"Citation contexts were annotated by 850 crowdsource workers who made a total of 29,926 annotations and individually made between 4 and 240 annotations.",
"Each sentence was annotated, on average, 3.74 times.",
"This resulted in a total 9,159 crowdsourced instances which were divided to training and validation sets with 90% of the data used for the training set.",
"In addition to the crowdsourced data, a separate test set of size 1,861 was annotated by a trained, expert annotator to ensure high quality of the dataset.",
"For the first scaffold (citation worthiness), we sample sentences from papers and consider the sentences with citations as positive labels.",
"We also remove the citation markers from those sentences 8 https://github.com/allenai/ science-parse such as numbered citations (e.g., [1]) or name-year combinations (e.g, Lee et al (2012)) to not make the second task artificially easy by only detecting citation markers.",
"For the second scaffold (cita-tion section title), respective to each test dataset, we sample citations from the ACL-ARC corpus and Semantic Scholar corpus 9 and extract the citation context as well as their corresponding sections.",
"We manually define regular expression patterns mappings to normalized section titles: in-troduction, related work, method, experi-ments, conclusion.",
"Section titles which did not map to any of the aforementioned titles were excluded from the dataset.",
"Overall, the size of the data for scaffold tasks on the ACL-ARC dataset is about 47K (section title scaffold) and 50K (ci-tation worthiness) while on SciCite is about 91K and 73K for section title and citation worthiness scaffolds, respectively.",
"We implement our proposed scaffold framework using the AllenNLP library (Gardner et al., 2018).",
"For word representations, we use 100-dimensional GloVe vectors (Pennington et al., 2014) trained on a corpus of 6B tokens from Wikipedia and Gi-gaword.",
"For contextual representations, we use ELMo vectors released by Peters et al. (2018) 10 with output dimension size of 1,024 which have been trained on a dataset of 5.5B tokens.",
"We use a single-layer BiLSTM with a hidden dimension size of 50 for each direction 11 .",
"For each of scaffold tasks, we use a single-layer MLP with 20 hidden nodes , ReLU (Nair and Hinton, 2010) activation and a Dropout rate (Srivastava et al., 2014) of 0.2 between the hidden and input layers.",
"The hyperparameters i are tuned for best performance on the validation set of the respective datasets using a 0.0 to 0.3 grid search.",
"For example, the following hyperparameters are used for the ACL-ARC.",
"Citation worthiness saffold: 2 =0 .",
"08 , 3 =0 , section title scaffold: 3 =0 .",
"09 , 2 =0 ; both scaffolds: 2 =0 .",
"1 , 3 =0 .",
"05 .",
"Batch size is 8 for ACL-ARC dataset and 32 for SciCite dataset (re-call that SciCite is larger than ACL-ARC).",
"We 9 https://semanticscholar.org/ 10 https://allennlp.org/elmo 11 Experiments with other types of RNNs such as BiGRUs and more layers showed similar or slightly worst performance use Beaker 12 for running the experiments.",
"On the smaller dataset, our best model takes approximately 30 minutes per epoch to train (training time without ELMo is significantly faster).",
"It is known that multiple runs of probabilistic deep learning models can have variance in overall scores (Reimers and Gurevych, 2017) 13 .",
"We control this by setting random-number generator seeds; the reported overall results are average of multiple runs with different random seeds.",
"To facilitate reproducibility, we release our code, data, and trained models.",
"14 4.2 Baselines We compare our results to several baselines including the model with state-of-the-art performance on the ACL-ARC dataset.",
"BiLSTM Attention (with and without ELMo) .",
"This baseline uses a similar architecture to our proposed neural multitask learning framework, except that it only optimizes the network for the main loss regarding the citation intent classification ( L 1 ) and does not include the structural scaffolds.",
"We experiment with two variants of this model: with and without using the contextualized word vector representations (ELMo) of Peters et al. (2018).",
"This baseline is useful for evaluating the effect of adding scaffolds in controlled experiments.",
"Jurgens et al. (2018) .",
"To make sure our results are competitive with state-of-the-art results on this task, we also compare our model to Jurgens et al. (2018) which has the best reported results on the ACL-ARC dataset.",
"Jurgens et al. (2018) incorporate a variety of features, ranging from pattern-based features to topic-modeling features, to citation graph features.",
"They also incorporate section titles and relative section position in the paper as features.",
"Our implementation of this model achieves a macro-averaged F1 score of 0.526 using 10-fold cross-validation, which is in line with the highest reported results in Jurgens et al. (2018): 0.53 using leave-one-out cross validation.",
"We were not able to use 12 Beaker is a collaborative platform for reproducible research ( https://github.com/allenai/beaker ) 13 Some CuDNN methods are non-deterministic and the rest are only deterministic under the same underlying hardware.",
"See https://docs.",
"nvidia.com/deeplearning/sdk/pdf/cuDNN-Developer-Guide.pdf 14 https://github.com/allenai/scicite Model macro F1 B a s e li n e s BiLSTM-Attn 51.8 BiLSTM-Attn w/ ELMo 54.3 Previous SOTA (Jurgens et al., 2018) 54.6 T h i s w o r k BiLSTM-Attn + section title scaffold 56.9 BiLSTM-Attn + citation worthiness scaffold 56.3 BiLSTM-Attn + both scaffolds 63.1 BiLSTM-Attn w/ ELMo + both scaffolds 67.9 Table 3: Results on the ACL-ARC citations dataset.",
"leave-one-out cross validation in our experiments since it is impractical to re-train each variant of our deep learning models thousands of times.",
"Therefore, we opted for a standard setup of stratified train/validation/test data splits with 85% data used for training and the rest equally split between validation and test.",
"Our main results for the ACL-ARC dataset (Jur-gens et al., 2018) is shown in Table",
"3. We observe that our scaffold-enhanced models achieve clear improvements over the state-of-the-art approach on this task.",
"Starting with the BiLSTM-Attn' baseline with a macro F1 score of 51.8, adding the first scaffold task in BiLSTM-Attn + section title scaffold' improves the F1 score to 56.9 ( =5 . 1 ).",
"Adding the second scaffold in BiLSTM-Attn + citation worthiness scaffold' also results in similar improvements: 56.3 ( =4 . 5 ).",
"When both scaffolds are used simultaneously in BiLSTM-Attn + both scaffolds', the F1 score further improves to 63.1 ( =11 . 3 ), suggesting that the two tasks provide complementary signal that is useful for citation intent prediction.",
"The best result is achieved when we also add ELMo vectors (Peters et al., 2018) to the input representations in BiLSTM-Attn w/ ELMo + both scaffolds', achieving an F1 of 67.9, a major improvement from the previous state-of-the-art results of Jurgens et al. (2018) 54.6 ( =13 . 3 ).",
"We note that the scaffold tasks provide major contributions on top of the ELMo-enabled baseline ( = 13.6), demonstrating the efficacy of using structural scaffolds for citation intent prediction.",
"We note that these results were obtained without using hand-curated features or additional linguistic resources as used in Jurgens et al. (2018).",
"We also experimented with adding features used in Jurgens et al. (2018) to our best model and not only we did not see any improvements, but we observed Model macro F1 B a s e li n e s BiLSTM-Attn 77.2 BiLSTM-Attn w/ ELMo 82.6 Previous SOTA (Jurgens et al., 2018) 79.6 T h i s w o r k BiLSTM-Attn + section title scaffold 77.8 BiLSTM-Attn + citation worthiness scaffold 78.1 BiLSTM-Attn + both scaffolds 79.1 BiLSTM-Attn w/ ELMo + both scaffolds 84.0 Table 4: Results on the SciCite dataset.",
"at least 1.7% decline in performance.",
"This suggests that these additional manual features do not provide the model with any additional useful signals beyond what the model already learns from the data.",
"Table 4 shows the main results on SciCite dataset, where we see similar patterns.",
"Each scaffold task improves model performance.",
"Adding both scaffolds results in further improvements.",
"And the best results are obtained by using ELMo representation in addition to both scaffolds.",
"Note that this dataset is more than five times larger in size than the ACL-ARC, therefore the performance numbers are generally higher and the F1 gains are generally smaller since it is easier for the models to learn optimal parameters utilizing the larger annotated data.",
"On this dataset, the best baseline is the neural baseline with addition of ELMo contextual vectors achieving an F1 score of 82.6 followed by Jurgens et al. (2018), which is expected because neural models generally achieve higher gains when more training data is available and because Jurgens et al. (2018) was not designed with the SciCite dataset in mind.",
"The breakdown of results by intent on ACL-ARC and SciCite datasets is respectively shown in Tables 5 and 6.",
"Generally we observe that results on categories with more number of instances are higher.",
"For example on ACL-ARC, the results on the BACKGROUND category are the highest as this category is the most common.",
"Conversely, the results on the FUTUREWORK category are the lowest.",
"This category has the fewest data points (see distribution of the categories in Table 2) and thus it is harder for the model to learn the optimal parameters for correct classification in this category.",
"To gain more insight into why the scaffolds are helping the model in improved citation intent classification, we examine the attention weights assigned to inputs for our best proposed model",
"(b) Example from SciCite: Correct label is RESULTCOMPARISON ; our model correctly predicts it, while baseline considers it as BACKGROUND .",
"(BiLSTM-Attn w/ ELMo + both scaffolds') compared with the best neural baseline (BiLSTM-Attn w/ ELMO').",
"We conduct this analysis for examples from both datasets.",
"Figure 3 shows an example input citation along with the horizontal line and the heatmap of attention weights for this input resulting from our model versus the baseline.",
"For first example (3a) the true label is FUTUREWORK .",
"We observe that our model puts more weight on words surrounding the word fu-ture which is plausible given the true label.",
"On the other hand, the baseline model attends most to the words compare and consequently incorrectly predicts a COMPARE label.",
"In second example (3b) the true label is RESULTCOMPARISON .",
"The baseline incorrectly classifies it as a BACKGROUND , likely due to attending to another part of the sentence (analyzed seprately).",
"Our model correctly classifies this instance by putting more attention weights on words that relate to comparison of the results.",
"This suggests that the our model is more successful in learning optimal parameters for representing the citation text and classifying its respective intent compared with the baseline.",
"Note that the only difference between our model and the neural baseline is inclusion of the structural scaffolds.",
"Therefore, suggesting the effectiveness the scaffolds in informing the main task of relevant signals for citation intent classification.",
"Error analysis.",
"We next investigate errors made by our best model (Figure 4 plots classification er-rors).",
"One general error pattern is that the model has more tendency to make false positive errors in the BACKGROUND category likely due to this category dominating both datasets.",
"It's interesting that for the ACL-ARC dataset some prediction Category(#instances) Background(71) Compare(25) Extension(5) Future(5) Motivation(7) Use(26) Average(Macro) P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 BiLSTM-Attn 78.6 77.5 78.0 44.8 52.0 48.1 50.0 40.0 44.4 33.3 40.0 36.4 50.0 28.6 36.4 65.4 65.4 65.4 53.7 50.6 51.5 BiLSTM-Attnw/ELMo 76.5 87.3 81.6 59.1 52.0 55.3 66.7 40.0 50.0 33.3 40.0 36.4 50.0 28.6 36.4 69.6 61.5 65.3 59.2 51.6 54.2 PreviousSOTA(Jurgensetal.,2018) 75.6 87.3 81.1 70.6 48.0 57.1 66.7 40.0 50.0 50.0 20.0 28.6 75.0 42.9 54.6 51.6 61.5 56.1 64.9 49.9 54.6 BiLSTM-Attn+sectiontitlescaffold 77.2 85.9 81.3 53.8 56.0 54.9 100.0 40.0 57.1 33.3 40.0 36.4 50.0 28.6 36.4 81.8 69.2 75.0 66.0 53.3 56.9 BiLSTM-Attn+citationworthinessscaffold 77.1 90.1 83.1 59.1 52.0 55.3 100.0 40.0 57.1 28.6 40.0 33.3 50.0 28.6 36.4 81.0 65.4 72.3 66.0 52.7 56.3 BiLSTM-Attn+bothscaffolds 77.6 93.0 84.6 65.0 52.0 57.8 100.0 60.0 75.0 40.0 40.0 40.0 75.0 42.9 54.5 72.7 61.5 66.7 71.7 58.2 63.1 BiLSTM-Attn+bothscaffolds/wELMo 75.9 93.0 83.5 80.0 64.0 71.1 75.0 60.0 66.7 75.0 60.0 66.7 100.0 28.6 44.4 81.8 69.2 75.0 81.3 62.5 67.9 Table 5: Detailed per category classification results on ACL-ARC dataset.",
"errors are due to the model failing to properly differentiate the USE category with BACKGROUND .",
"We found out that some of these errors would have been possibly prevented by using additional context.",
"Table 7 shows a sample of such classification errors.",
"For the citation in the first row of the table, the model is likely distracted by model in (citation) and ILP formulation from (citation) deeming the sentence is referring to the use of another method from a cited paper and it misses the first part of the sentence describing the motivation.",
"This is likely due to the small number of training instances in the MOTIVATION category, preventing the model to learn such nuances.",
"For the examples in the second and third row, it is not clear if it is possible to make the correct prediction without additional context.",
"And similarly in the last row the instance seems ambiguous without accessing to additional context.",
"Similarly as shown in Figure 4a two of FUTUREWORK labels are wrongly classified.",
"One of them is illustrated in the forth row of Table 7 where perhaps additional context could have helped the model in identifying the correct label.",
"One possible way to prevent this type of errors, is to provide the model with an additional input, modeling the extended surrounding context.",
"We experimented with encoding the extended surrounding context using a BiLSTM and concatenating it with the main citation context vector (z), but it resulted in a large decline in overall performance likely due to the overall noise introduced by the additional context.",
"A possible future work is to investigate alternative effective approaches for incorporating the surrounding extended context.",
"There is a large body of work studying the intent of citations and devising categorization systems (Stevens and Giuliano, 1965; Moravcsik and Mu-rugesan, 1975; Garzone and Mercer, 2000; White, 2004; Ahmed et al., 2004; Teufel et al., 2006; Agarwal et al., 2010; Dong and Schafer, 2011).",
"Most of these efforts provide citation categories that are too fine-grained, some of which rarely occur in papers.",
"Therefore, they are hardly useful for automated analysis of scientific publications.",
"To address these problems and to unify previous u s e f u t r b c k g e x t n c o m p m o t v Predicted label use futr bckg extn comp motv T r u e l a b e l 1 7 0 0 0 0 1 0 1 0 2 0 0 2 1 1 0 1 0 0 0 0 5 1 1 2 0 3 0 0",
"efforts, in a recent work, Jurgens et al. (2018) proposed a six category system for citation intents.",
"In this work, we focus on two schemes: (1) the scheme proposed by Jurgens et al. (2018) and (2) an additional, more coarse-grained general-purpose category system that we propose (details in 3).",
"Unlike other schemes that are domain-specific, our scheme is general and naturally fits in scientific discourse in multiple domains.",
"Early works in automated citation intent classification were based on rule-based systems (e.g., (Garzone and Mercer, 2000; Pham and Hoffmann, 2003)).",
"Later, machine learning methods based on linguistic patterns and other hand-engineered features from citation context were found to be effective.",
"For example, Teufel et al. (2006) proposed use of cue phrases, a set of expressions that talk about the act of presenting research in a paper.",
"Abu-Jbara et al. (2013) relied on lexical, structural, and syntactic features and a linear SVM for classification.",
"Researchers have also investigated methods of finding cited spans in the cited papers.",
"Examples include feature-based methods (Cohan et al., 2015), domain-specific knowledge (Cohan and Goharian, 2017), and a recent CNN-based model for joint prediction of cited spans and citation function (Su et al., 2018).",
"We also experimented with CNNs but found the attention BiLSTM model to work significantly better.",
"Jurgens et al. (2018) expanded all pre-existing feature-based efforts on citation intent classification by proposing a comprehensive set of engineered features, including boostrapped patterns, topic modeling, dependency-based, and metadata features for the task.",
"We argue that we can capture necessary information from the citation context using a data driven method, without the need for hand-engineered domain-dependent features or external resources.",
"We propose a novel scaffold neural model for citation intent classification to incorporate structural information of scientific discourse into citations, borrowing the scaffold terminology from Swayamdipta et al. (2018) who use auxiliary syntactic tasks for semantic problems.",
"In this work, we show that structural properties related to scientific discourse can be effectively used to inform citation intent classification.",
"We propose a multitask learning framework with two auxiliary tasks (predicting section titles and citation worthiness) as two scaffolds related to the main task of citation intent prediction.",
"Our model achieves state-of-the-art result (F1 score of 67.9%) on the ACL-ARC dataset with 13.3 absolute increase over the best previous results.",
"We additionally introduce SciCite, a new large dataset of citation intents and also show the effectiveness of our model on this dataset.",
"Our dataset, unlike existing datasets that are designed based on a specific domain, is more general and fits in scientific discourse from multiple scientific domains.",
"We demonstrate that carefully chosen auxiliary tasks that are inherently relevant to a main task can be leveraged to improve the performance on the main task.",
"An interesting line of future work is to explore the design of such tasks or explore the properties or similarities between the auxiliary and the main tasks.",
"Another relevant line of work is adapting our model to other domains containing documents with similar linked structured such as Wikipedia articles.",
"Future work may benefit from replacing ELMo with other types of contextualized representations such as BERT in our scaffold model.",
"For example, at the time of finalizing the camera ready version of this paper, Beltagy et al. (2019) showed that a BERT contextualized representation model (Devlin et al., 2018) trained on scientific text can achieve promising results on the SciCite dataset.",
"We thank Kyle Lo, Dan Weld, and Iz Beltagy for helpful discussions, Oren Etzioni for feedback on the paper, David Jurgens for helping us with their ACL-ARC dataset and reproducing their results, and the three anonymous reviewers for their comments and suggestions.",
"Computations on beaker.org were supported in part by credits from Google Cloud."
] | [
"abstain",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"result",
"objective",
"result",
"objective",
"method",
"objective",
"abstain",
"abstain",
"method",
"result",
"other",
"other"
] |